00:00:00.000 Started by upstream project "autotest-per-patch" build number 127117 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.110 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.154 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.212 Using shallow fetch with depth 1 00:00:00.212 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.212 > git --version # timeout=10 00:00:00.236 > git --version # 'git version 2.39.2' 00:00:00.236 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.462 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.472 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.483 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.483 > git config core.sparsecheckout # timeout=10 00:00:06.493 > git read-tree -mu HEAD # timeout=10 00:00:06.508 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.565 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.565 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.673 [Pipeline] Start of Pipeline 00:00:06.684 [Pipeline] library 00:00:06.685 Loading library shm_lib@master 00:00:06.685 Library shm_lib@master is cached. Copying from home. 00:00:06.699 [Pipeline] node 00:00:06.707 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:06.708 [Pipeline] { 00:00:06.716 [Pipeline] catchError 00:00:06.717 [Pipeline] { 00:00:06.726 [Pipeline] wrap 00:00:06.733 [Pipeline] { 00:00:06.738 [Pipeline] stage 00:00:06.739 [Pipeline] { (Prologue) 00:00:06.755 [Pipeline] echo 00:00:06.756 Node: VM-host-SM17 00:00:06.762 [Pipeline] cleanWs 00:00:06.770 [WS-CLEANUP] Deleting project workspace... 00:00:06.770 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.776 [WS-CLEANUP] done 00:00:06.938 [Pipeline] setCustomBuildProperty 00:00:07.018 [Pipeline] httpRequest 00:00:07.033 [Pipeline] echo 00:00:07.034 Sorcerer 10.211.164.101 is alive 00:00:07.040 [Pipeline] httpRequest 00:00:07.044 HttpMethod: GET 00:00:07.045 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.046 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.061 Response Code: HTTP/1.1 200 OK 00:00:07.062 Success: Status code 200 is in the accepted range: 200,404 00:00:07.062 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.007 [Pipeline] sh 00:00:09.291 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.307 [Pipeline] httpRequest 00:00:09.335 [Pipeline] echo 00:00:09.336 Sorcerer 10.211.164.101 is alive 00:00:09.349 [Pipeline] httpRequest 00:00:09.354 HttpMethod: GET 00:00:09.355 URL: http://10.211.164.101/packages/spdk_68f79842378fdd3ebc3795ae0c42ef8e24177970.tar.gz 00:00:09.355 Sending request to url: http://10.211.164.101/packages/spdk_68f79842378fdd3ebc3795ae0c42ef8e24177970.tar.gz 00:00:09.370 Response Code: HTTP/1.1 200 OK 00:00:09.370 Success: Status code 200 is in the accepted range: 200,404 00:00:09.371 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_68f79842378fdd3ebc3795ae0c42ef8e24177970.tar.gz 00:00:36.084 [Pipeline] sh 00:00:36.364 + tar --no-same-owner -xf spdk_68f79842378fdd3ebc3795ae0c42ef8e24177970.tar.gz 00:00:39.662 [Pipeline] sh 00:00:39.940 + git -C spdk log --oneline -n5 00:00:39.940 68f798423 scripts/perf: Remove vhost/common.sh source from run_vhost_test.sh 00:00:39.940 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:00:39.940 50222f810 configure: don't exit on non Intel platforms 00:00:39.940 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:39.940 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:00:39.960 [Pipeline] writeFile 00:00:39.979 [Pipeline] sh 00:00:40.259 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:40.271 [Pipeline] sh 00:00:40.552 + cat autorun-spdk.conf 00:00:40.552 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.552 SPDK_TEST_NVMF=1 00:00:40.552 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.552 SPDK_TEST_URING=1 00:00:40.552 SPDK_TEST_USDT=1 00:00:40.552 SPDK_RUN_UBSAN=1 00:00:40.552 NET_TYPE=virt 00:00:40.552 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:40.559 RUN_NIGHTLY=0 00:00:40.561 [Pipeline] } 00:00:40.578 [Pipeline] // stage 00:00:40.593 [Pipeline] stage 00:00:40.596 [Pipeline] { (Run VM) 00:00:40.611 [Pipeline] sh 00:00:40.893 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:40.893 + echo 'Start stage prepare_nvme.sh' 00:00:40.893 Start stage prepare_nvme.sh 00:00:40.893 + [[ -n 2 ]] 00:00:40.893 + disk_prefix=ex2 00:00:40.893 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:00:40.893 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:00:40.893 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:00:40.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.893 ++ SPDK_TEST_NVMF=1 00:00:40.893 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.893 ++ SPDK_TEST_URING=1 00:00:40.893 ++ SPDK_TEST_USDT=1 00:00:40.893 ++ SPDK_RUN_UBSAN=1 00:00:40.893 ++ NET_TYPE=virt 00:00:40.893 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 
00:00:40.893 ++ RUN_NIGHTLY=0 00:00:40.893 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:40.893 + nvme_files=() 00:00:40.893 + declare -A nvme_files 00:00:40.893 + backend_dir=/var/lib/libvirt/images/backends 00:00:40.893 + nvme_files['nvme.img']=5G 00:00:40.893 + nvme_files['nvme-cmb.img']=5G 00:00:40.893 + nvme_files['nvme-multi0.img']=4G 00:00:40.893 + nvme_files['nvme-multi1.img']=4G 00:00:40.893 + nvme_files['nvme-multi2.img']=4G 00:00:40.893 + nvme_files['nvme-openstack.img']=8G 00:00:40.893 + nvme_files['nvme-zns.img']=5G 00:00:40.893 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:40.893 + (( SPDK_TEST_FTL == 1 )) 00:00:40.893 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:40.893 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:40.893 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:40.893 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:40.893 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:40.893 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:40.893 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:40.893 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:40.893 + for nvme in "${!nvme_files[@]}" 00:00:40.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:41.852 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:41.852 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:41.852 + echo 'End stage prepare_nvme.sh' 00:00:41.852 End stage prepare_nvme.sh 00:00:41.864 [Pipeline] sh 00:00:42.144 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:42.144 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:00:42.144 00:00:42.144 
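The create_nvme_img.sh helper used in the prepare_nvme.sh stage above is not shown in this log; judging from the "Formatting '...', fmt=raw size=... preallocation=falloc" messages it prints, it appears to wrap qemu-img. A minimal sketch, under that assumption, of producing one equivalent raw backing file by hand:

# Sketch only: recreate one raw NVMe backing file like those formatted above.
# The qemu-img invocation is an assumption inferred from the
# "fmt=raw size=4294967296 preallocation=falloc" output; the real helper is
# spdk/scripts/vagrant/create_nvme_img.sh.
backend_dir=/var/lib/libvirt/images/backends
sudo mkdir -p "$backend_dir"
sudo qemu-img create -f raw -o preallocation=falloc \
    "$backend_dir/ex2-nvme-multi2.img" 4G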
DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:00:42.144 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:00:42.144 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:42.144 HELP=0 00:00:42.144 DRY_RUN=0 00:00:42.144 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:42.144 NVME_DISKS_TYPE=nvme,nvme, 00:00:42.144 NVME_AUTO_CREATE=0 00:00:42.144 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:42.144 NVME_CMB=,, 00:00:42.144 NVME_PMR=,, 00:00:42.144 NVME_ZNS=,, 00:00:42.144 NVME_MS=,, 00:00:42.144 NVME_FDP=,, 00:00:42.144 SPDK_VAGRANT_DISTRO=fedora38 00:00:42.144 SPDK_VAGRANT_VMCPU=10 00:00:42.144 SPDK_VAGRANT_VMRAM=12288 00:00:42.144 SPDK_VAGRANT_PROVIDER=libvirt 00:00:42.144 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:42.144 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:42.144 SPDK_OPENSTACK_NETWORK=0 00:00:42.144 VAGRANT_PACKAGE_BOX=0 00:00:42.144 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:42.144 FORCE_DISTRO=true 00:00:42.144 VAGRANT_BOX_VERSION= 00:00:42.144 EXTRA_VAGRANTFILES= 00:00:42.144 NIC_MODEL=e1000 00:00:42.144 00:00:42.144 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:00:42.144 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:45.500 Bringing machine 'default' up with 'libvirt' provider... 00:00:46.442 ==> default: Creating image (snapshot of base box volume). 00:00:46.442 ==> default: Creating domain with the following settings... 
00:00:46.442 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721856150_9ebe8e1b971955908709 00:00:46.442 ==> default: -- Domain type: kvm 00:00:46.442 ==> default: -- Cpus: 10 00:00:46.442 ==> default: -- Feature: acpi 00:00:46.442 ==> default: -- Feature: apic 00:00:46.442 ==> default: -- Feature: pae 00:00:46.442 ==> default: -- Memory: 12288M 00:00:46.442 ==> default: -- Memory Backing: hugepages: 00:00:46.442 ==> default: -- Management MAC: 00:00:46.442 ==> default: -- Loader: 00:00:46.442 ==> default: -- Nvram: 00:00:46.442 ==> default: -- Base box: spdk/fedora38 00:00:46.442 ==> default: -- Storage pool: default 00:00:46.442 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721856150_9ebe8e1b971955908709.img (20G) 00:00:46.442 ==> default: -- Volume Cache: default 00:00:46.442 ==> default: -- Kernel: 00:00:46.442 ==> default: -- Initrd: 00:00:46.442 ==> default: -- Graphics Type: vnc 00:00:46.442 ==> default: -- Graphics Port: -1 00:00:46.442 ==> default: -- Graphics IP: 127.0.0.1 00:00:46.442 ==> default: -- Graphics Password: Not defined 00:00:46.442 ==> default: -- Video Type: cirrus 00:00:46.442 ==> default: -- Video VRAM: 9216 00:00:46.442 ==> default: -- Sound Type: 00:00:46.442 ==> default: -- Keymap: en-us 00:00:46.442 ==> default: -- TPM Path: 00:00:46.443 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:46.443 ==> default: -- Command line args: 00:00:46.443 ==> default: -> value=-device, 00:00:46.443 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:46.443 ==> default: -> value=-drive, 00:00:46.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:46.443 ==> default: -> value=-device, 00:00:46.443 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:46.443 ==> default: -> value=-device, 00:00:46.443 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:46.443 ==> default: -> value=-drive, 00:00:46.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:46.443 ==> default: -> value=-device, 00:00:46.443 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:46.443 ==> default: -> value=-drive, 00:00:46.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:46.443 ==> default: -> value=-device, 00:00:46.443 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:46.443 ==> default: -> value=-drive, 00:00:46.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:46.443 ==> default: -> value=-device, 00:00:46.443 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:46.443 ==> default: Creating shared folders metadata... 00:00:46.443 ==> default: Starting domain. 00:00:48.344 ==> default: Waiting for domain to get an IP address... 00:01:03.230 ==> default: Waiting for SSH to become available... 00:01:05.132 ==> default: Configuring and enabling network interfaces... 
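The "-device nvme ... -device nvme-ns" argument pairs above define two emulated NVMe controllers: nvme-0 (serial 12340) with a single namespace backed by ex2-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the ex2-nvme-multi{0,1,2}.img files. A minimal sketch of the same topology as a standalone qemu-system-x86_64 invocation follows; the CPU count, memory size, block sizes and image paths are taken from the domain settings above, while the boot disk, machine type and any further arguments the vagrant-libvirt provider adds are omitted, so this is not the exact command libvirt runs.

# Sketch only: the NVMe controller/namespace layout from the libvirt domain
# above, expressed as direct QEMU arguments.
BACKENDS=/var/lib/libvirt/images/backends
qemu-system-x86_64 -enable-kvm -smp 10 -m 12288 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=$BACKENDS/ex2-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=$BACKENDS/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=$BACKENDS/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=$BACKENDS/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096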
00:01:09.321 default: SSH address: 192.168.121.154:22 00:01:09.321 default: SSH username: vagrant 00:01:09.321 default: SSH auth method: private key 00:01:11.227 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:19.338 ==> default: Mounting SSHFS shared folder... 00:01:20.718 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:20.718 ==> default: Checking Mount.. 00:01:22.106 ==> default: Folder Successfully Mounted! 00:01:22.106 ==> default: Running provisioner: file... 00:01:23.040 default: ~/.gitconfig => .gitconfig 00:01:23.298 00:01:23.298 SUCCESS! 00:01:23.298 00:01:23.299 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:23.299 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:23.299 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:23.299 00:01:23.308 [Pipeline] } 00:01:23.326 [Pipeline] // stage 00:01:23.336 [Pipeline] dir 00:01:23.336 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:01:23.338 [Pipeline] { 00:01:23.353 [Pipeline] catchError 00:01:23.354 [Pipeline] { 00:01:23.368 [Pipeline] sh 00:01:23.647 + vagrant ssh-config --host vagrant 00:01:23.647 + sed -ne /^Host/,$p 00:01:23.647 + tee ssh_conf 00:01:27.841 Host vagrant 00:01:27.841 HostName 192.168.121.154 00:01:27.841 User vagrant 00:01:27.841 Port 22 00:01:27.841 UserKnownHostsFile /dev/null 00:01:27.841 StrictHostKeyChecking no 00:01:27.841 PasswordAuthentication no 00:01:27.841 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:27.841 IdentitiesOnly yes 00:01:27.841 LogLevel FATAL 00:01:27.841 ForwardAgent yes 00:01:27.841 ForwardX11 yes 00:01:27.841 00:01:27.854 [Pipeline] withEnv 00:01:27.857 [Pipeline] { 00:01:27.873 [Pipeline] sh 00:01:28.155 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:28.155 source /etc/os-release 00:01:28.155 [[ -e /image.version ]] && img=$(< /image.version) 00:01:28.155 # Minimal, systemd-like check. 00:01:28.155 if [[ -e /.dockerenv ]]; then 00:01:28.155 # Clear garbage from the node's name: 00:01:28.155 # agt-er_autotest_547-896 -> autotest_547-896 00:01:28.155 # $HOSTNAME is the actual container id 00:01:28.155 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:28.155 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:28.155 # We can assume this is a mount from a host where container is running, 00:01:28.155 # so fetch its hostname to easily identify the target swarm worker. 
00:01:28.155 container="$(< /etc/hostname) ($agent)" 00:01:28.155 else 00:01:28.155 # Fallback 00:01:28.155 container=$agent 00:01:28.155 fi 00:01:28.155 fi 00:01:28.155 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:28.155 00:01:28.426 [Pipeline] } 00:01:28.450 [Pipeline] // withEnv 00:01:28.459 [Pipeline] setCustomBuildProperty 00:01:28.474 [Pipeline] stage 00:01:28.476 [Pipeline] { (Tests) 00:01:28.495 [Pipeline] sh 00:01:28.773 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:28.787 [Pipeline] sh 00:01:29.066 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:29.082 [Pipeline] timeout 00:01:29.082 Timeout set to expire in 30 min 00:01:29.084 [Pipeline] { 00:01:29.101 [Pipeline] sh 00:01:29.413 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:29.983 HEAD is now at 68f798423 scripts/perf: Remove vhost/common.sh source from run_vhost_test.sh 00:01:29.996 [Pipeline] sh 00:01:30.277 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:30.549 [Pipeline] sh 00:01:30.831 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:30.848 [Pipeline] sh 00:01:31.127 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:31.127 ++ readlink -f spdk_repo 00:01:31.387 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:31.387 + [[ -n /home/vagrant/spdk_repo ]] 00:01:31.387 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:31.387 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:31.387 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:31.387 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:31.387 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:31.387 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:31.387 + cd /home/vagrant/spdk_repo 00:01:31.387 + source /etc/os-release 00:01:31.387 ++ NAME='Fedora Linux' 00:01:31.387 ++ VERSION='38 (Cloud Edition)' 00:01:31.387 ++ ID=fedora 00:01:31.387 ++ VERSION_ID=38 00:01:31.387 ++ VERSION_CODENAME= 00:01:31.387 ++ PLATFORM_ID=platform:f38 00:01:31.387 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:31.387 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.387 ++ LOGO=fedora-logo-icon 00:01:31.387 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:31.387 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.387 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:31.387 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.387 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.387 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.387 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:31.387 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.387 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:31.387 ++ SUPPORT_END=2024-05-14 00:01:31.387 ++ VARIANT='Cloud Edition' 00:01:31.387 ++ VARIANT_ID=cloud 00:01:31.387 + uname -a 00:01:31.387 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:31.387 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:31.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:31.646 Hugepages 00:01:31.646 node hugesize free / total 00:01:31.646 node0 1048576kB 0 / 0 00:01:31.646 node0 2048kB 0 / 0 00:01:31.646 00:01:31.646 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.906 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:31.906 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:31.906 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:31.906 + rm -f /tmp/spdk-ld-path 00:01:31.906 + source autorun-spdk.conf 00:01:31.906 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.906 ++ SPDK_TEST_NVMF=1 00:01:31.906 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.906 ++ SPDK_TEST_URING=1 00:01:31.906 ++ SPDK_TEST_USDT=1 00:01:31.906 ++ SPDK_RUN_UBSAN=1 00:01:31.906 ++ NET_TYPE=virt 00:01:31.906 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.906 ++ RUN_NIGHTLY=0 00:01:31.906 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.906 + [[ -n '' ]] 00:01:31.906 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:31.906 + for M in /var/spdk/build-*-manifest.txt 00:01:31.906 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.906 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.906 + for M in /var/spdk/build-*-manifest.txt 00:01:31.906 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.906 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.906 ++ uname 00:01:31.906 + [[ Linux == \L\i\n\u\x ]] 00:01:31.906 + sudo dmesg -T 00:01:31.906 + sudo dmesg --clear 00:01:31.906 + sudo dmesg -Tw 00:01:31.906 + dmesg_pid=5103 00:01:31.906 + [[ Fedora Linux == FreeBSD ]] 00:01:31.906 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.906 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.906 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.906 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.906 + export FIO_BIN=/usr/src/fio-static/fio 
00:01:31.906 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.906 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.906 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:31.906 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.906 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.906 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.906 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.906 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.906 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.906 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.906 Test configuration: 00:01:31.906 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.906 SPDK_TEST_NVMF=1 00:01:31.906 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.906 SPDK_TEST_URING=1 00:01:31.906 SPDK_TEST_USDT=1 00:01:31.906 SPDK_RUN_UBSAN=1 00:01:31.906 NET_TYPE=virt 00:01:31.906 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.906 RUN_NIGHTLY=0 21:23:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:31.906 21:23:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.906 21:23:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.906 21:23:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.906 21:23:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.906 21:23:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.165 21:23:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.165 21:23:16 -- paths/export.sh@5 -- $ export PATH 00:01:32.165 21:23:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.165 21:23:16 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:32.165 21:23:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:32.165 21:23:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721856196.XXXXXX 00:01:32.165 21:23:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721856196.XgX3XT 00:01:32.165 21:23:16 -- 
common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:32.165 21:23:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:32.165 21:23:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:32.165 21:23:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:32.166 21:23:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:32.166 21:23:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:32.166 21:23:16 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:32.166 21:23:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.166 21:23:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:32.166 21:23:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:32.166 21:23:16 -- pm/common@17 -- $ local monitor 00:01:32.166 21:23:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.166 21:23:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.166 21:23:16 -- pm/common@25 -- $ sleep 1 00:01:32.166 21:23:16 -- pm/common@21 -- $ date +%s 00:01:32.166 21:23:16 -- pm/common@21 -- $ date +%s 00:01:32.166 21:23:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721856196 00:01:32.166 21:23:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721856196 00:01:32.166 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721856196_collect-vmstat.pm.log 00:01:32.166 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721856196_collect-cpu-load.pm.log 00:01:33.102 21:23:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:33.102 21:23:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:33.102 21:23:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:33.102 21:23:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:33.102 21:23:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:33.102 Wed Jul 24 09:23:17 PM UTC 2024 00:01:33.102 21:23:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:33.102 v24.09-pre-312-g68f798423 00:01:33.102 21:23:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:33.102 21:23:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:33.102 21:23:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:33.102 21:23:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:33.102 21:23:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:33.102 21:23:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.102 ************************************ 00:01:33.102 START TEST ubsan 00:01:33.102 ************************************ 00:01:33.102 using ubsan 00:01:33.102 21:23:17 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:33.102 00:01:33.102 real 0m0.000s 00:01:33.102 user 0m0.000s 00:01:33.102 sys 0m0.000s 00:01:33.102 
21:23:17 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:33.102 21:23:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:33.102 ************************************ 00:01:33.102 END TEST ubsan 00:01:33.102 ************************************ 00:01:33.102 21:23:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:33.102 21:23:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:33.102 21:23:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:33.102 21:23:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:33.102 21:23:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:33.102 21:23:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:33.102 21:23:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:33.102 21:23:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:33.102 21:23:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:33.361 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:33.361 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:33.621 Using 'verbs' RDMA provider 00:01:49.463 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:01.659 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:01.659 Creating mk/config.mk...done. 00:02:01.659 Creating mk/cc.flags.mk...done. 00:02:01.659 Type 'make' to build. 00:02:01.659 21:23:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:01.659 21:23:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:01.659 21:23:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:01.659 21:23:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.659 ************************************ 00:02:01.659 START TEST make 00:02:01.659 ************************************ 00:02:01.659 21:23:45 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:01.659 make[1]: Nothing to be done for 'all'. 
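The configure invocation and "make -j10" above are the build step that spdk/autobuild.sh drives inside the VM. A minimal sketch of reproducing the same step by hand in a checked-out SPDK tree, with the flags copied verbatim from the log (the fio source path and the job count are machine-specific):

# Sketch only: the configure + make step shown in the autobuild output above.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-uring --with-shared
make -j10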
00:02:11.633 The Meson build system 00:02:11.633 Version: 1.3.1 00:02:11.633 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:11.633 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:11.633 Build type: native build 00:02:11.633 Program cat found: YES (/usr/bin/cat) 00:02:11.633 Project name: DPDK 00:02:11.633 Project version: 24.03.0 00:02:11.633 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:11.633 C linker for the host machine: cc ld.bfd 2.39-16 00:02:11.633 Host machine cpu family: x86_64 00:02:11.633 Host machine cpu: x86_64 00:02:11.633 Message: ## Building in Developer Mode ## 00:02:11.633 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.633 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:11.633 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.633 Program python3 found: YES (/usr/bin/python3) 00:02:11.633 Program cat found: YES (/usr/bin/cat) 00:02:11.633 Compiler for C supports arguments -march=native: YES 00:02:11.633 Checking for size of "void *" : 8 00:02:11.633 Checking for size of "void *" : 8 (cached) 00:02:11.633 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:11.633 Library m found: YES 00:02:11.633 Library numa found: YES 00:02:11.633 Has header "numaif.h" : YES 00:02:11.633 Library fdt found: NO 00:02:11.633 Library execinfo found: NO 00:02:11.633 Has header "execinfo.h" : YES 00:02:11.633 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:11.633 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.633 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.633 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.633 Run-time dependency openssl found: YES 3.0.9 00:02:11.633 Run-time dependency libpcap found: YES 1.10.4 00:02:11.633 Has header "pcap.h" with dependency libpcap: YES 00:02:11.633 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.633 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.633 Compiler for C supports arguments -Wformat: YES 00:02:11.633 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.633 Compiler for C supports arguments -Wformat-security: NO 00:02:11.633 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.633 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.633 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.633 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.633 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.633 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.633 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.633 Compiler for C supports arguments -Wundef: YES 00:02:11.633 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.633 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.633 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.633 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.633 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.633 Program objdump found: YES (/usr/bin/objdump) 00:02:11.633 Compiler for C supports arguments -mavx512f: YES 00:02:11.633 Checking if "AVX512 checking" compiles: YES 00:02:11.633 Fetching value of define "__SSE4_2__" : 1 00:02:11.633 Fetching value of define 
"__AES__" : 1 00:02:11.633 Fetching value of define "__AVX__" : 1 00:02:11.633 Fetching value of define "__AVX2__" : 1 00:02:11.633 Fetching value of define "__AVX512BW__" : (undefined) 00:02:11.633 Fetching value of define "__AVX512CD__" : (undefined) 00:02:11.633 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:11.633 Fetching value of define "__AVX512F__" : (undefined) 00:02:11.633 Fetching value of define "__AVX512VL__" : (undefined) 00:02:11.633 Fetching value of define "__PCLMUL__" : 1 00:02:11.633 Fetching value of define "__RDRND__" : 1 00:02:11.633 Fetching value of define "__RDSEED__" : 1 00:02:11.633 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.633 Fetching value of define "__znver1__" : (undefined) 00:02:11.633 Fetching value of define "__znver2__" : (undefined) 00:02:11.633 Fetching value of define "__znver3__" : (undefined) 00:02:11.633 Fetching value of define "__znver4__" : (undefined) 00:02:11.633 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.633 Message: lib/log: Defining dependency "log" 00:02:11.633 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.633 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.633 Checking for function "getentropy" : NO 00:02:11.633 Message: lib/eal: Defining dependency "eal" 00:02:11.633 Message: lib/ring: Defining dependency "ring" 00:02:11.633 Message: lib/rcu: Defining dependency "rcu" 00:02:11.633 Message: lib/mempool: Defining dependency "mempool" 00:02:11.633 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.633 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.633 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:11.633 Compiler for C supports arguments -mpclmul: YES 00:02:11.633 Compiler for C supports arguments -maes: YES 00:02:11.633 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.633 Compiler for C supports arguments -mavx512bw: YES 00:02:11.633 Compiler for C supports arguments -mavx512dq: YES 00:02:11.633 Compiler for C supports arguments -mavx512vl: YES 00:02:11.633 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.633 Compiler for C supports arguments -mavx2: YES 00:02:11.633 Compiler for C supports arguments -mavx: YES 00:02:11.633 Message: lib/net: Defining dependency "net" 00:02:11.633 Message: lib/meter: Defining dependency "meter" 00:02:11.633 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.633 Message: lib/pci: Defining dependency "pci" 00:02:11.633 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.633 Message: lib/hash: Defining dependency "hash" 00:02:11.633 Message: lib/timer: Defining dependency "timer" 00:02:11.633 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.633 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.633 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.633 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.633 Message: lib/power: Defining dependency "power" 00:02:11.633 Message: lib/reorder: Defining dependency "reorder" 00:02:11.633 Message: lib/security: Defining dependency "security" 00:02:11.633 Has header "linux/userfaultfd.h" : YES 00:02:11.633 Has header "linux/vduse.h" : YES 00:02:11.633 Message: lib/vhost: Defining dependency "vhost" 00:02:11.633 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.633 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.633 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.633 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:11.633 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:11.633 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:11.634 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:11.634 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:11.634 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:11.634 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:11.634 Program doxygen found: YES (/usr/bin/doxygen) 00:02:11.634 Configuring doxy-api-html.conf using configuration 00:02:11.634 Configuring doxy-api-man.conf using configuration 00:02:11.634 Program mandb found: YES (/usr/bin/mandb) 00:02:11.634 Program sphinx-build found: NO 00:02:11.634 Configuring rte_build_config.h using configuration 00:02:11.634 Message: 00:02:11.634 ================= 00:02:11.634 Applications Enabled 00:02:11.634 ================= 00:02:11.634 00:02:11.634 apps: 00:02:11.634 00:02:11.634 00:02:11.634 Message: 00:02:11.634 ================= 00:02:11.634 Libraries Enabled 00:02:11.634 ================= 00:02:11.634 00:02:11.634 libs: 00:02:11.634 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:11.634 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:11.634 cryptodev, dmadev, power, reorder, security, vhost, 00:02:11.634 00:02:11.634 Message: 00:02:11.634 =============== 00:02:11.634 Drivers Enabled 00:02:11.634 =============== 00:02:11.634 00:02:11.634 common: 00:02:11.634 00:02:11.634 bus: 00:02:11.634 pci, vdev, 00:02:11.634 mempool: 00:02:11.634 ring, 00:02:11.634 dma: 00:02:11.634 00:02:11.634 net: 00:02:11.634 00:02:11.634 crypto: 00:02:11.634 00:02:11.634 compress: 00:02:11.634 00:02:11.634 vdpa: 00:02:11.634 00:02:11.634 00:02:11.634 Message: 00:02:11.634 ================= 00:02:11.634 Content Skipped 00:02:11.634 ================= 00:02:11.634 00:02:11.634 apps: 00:02:11.634 dumpcap: explicitly disabled via build config 00:02:11.634 graph: explicitly disabled via build config 00:02:11.634 pdump: explicitly disabled via build config 00:02:11.634 proc-info: explicitly disabled via build config 00:02:11.634 test-acl: explicitly disabled via build config 00:02:11.634 test-bbdev: explicitly disabled via build config 00:02:11.634 test-cmdline: explicitly disabled via build config 00:02:11.634 test-compress-perf: explicitly disabled via build config 00:02:11.634 test-crypto-perf: explicitly disabled via build config 00:02:11.634 test-dma-perf: explicitly disabled via build config 00:02:11.634 test-eventdev: explicitly disabled via build config 00:02:11.634 test-fib: explicitly disabled via build config 00:02:11.634 test-flow-perf: explicitly disabled via build config 00:02:11.634 test-gpudev: explicitly disabled via build config 00:02:11.634 test-mldev: explicitly disabled via build config 00:02:11.634 test-pipeline: explicitly disabled via build config 00:02:11.634 test-pmd: explicitly disabled via build config 00:02:11.634 test-regex: explicitly disabled via build config 00:02:11.634 test-sad: explicitly disabled via build config 00:02:11.634 test-security-perf: explicitly disabled via build config 00:02:11.634 00:02:11.634 libs: 00:02:11.634 argparse: explicitly disabled via build config 00:02:11.634 metrics: explicitly disabled via build config 00:02:11.634 acl: explicitly disabled via build config 00:02:11.634 bbdev: explicitly disabled via build config 00:02:11.634 
bitratestats: explicitly disabled via build config 00:02:11.634 bpf: explicitly disabled via build config 00:02:11.634 cfgfile: explicitly disabled via build config 00:02:11.634 distributor: explicitly disabled via build config 00:02:11.634 efd: explicitly disabled via build config 00:02:11.634 eventdev: explicitly disabled via build config 00:02:11.634 dispatcher: explicitly disabled via build config 00:02:11.634 gpudev: explicitly disabled via build config 00:02:11.634 gro: explicitly disabled via build config 00:02:11.634 gso: explicitly disabled via build config 00:02:11.634 ip_frag: explicitly disabled via build config 00:02:11.634 jobstats: explicitly disabled via build config 00:02:11.634 latencystats: explicitly disabled via build config 00:02:11.634 lpm: explicitly disabled via build config 00:02:11.634 member: explicitly disabled via build config 00:02:11.634 pcapng: explicitly disabled via build config 00:02:11.634 rawdev: explicitly disabled via build config 00:02:11.634 regexdev: explicitly disabled via build config 00:02:11.634 mldev: explicitly disabled via build config 00:02:11.634 rib: explicitly disabled via build config 00:02:11.634 sched: explicitly disabled via build config 00:02:11.634 stack: explicitly disabled via build config 00:02:11.634 ipsec: explicitly disabled via build config 00:02:11.634 pdcp: explicitly disabled via build config 00:02:11.634 fib: explicitly disabled via build config 00:02:11.634 port: explicitly disabled via build config 00:02:11.634 pdump: explicitly disabled via build config 00:02:11.634 table: explicitly disabled via build config 00:02:11.634 pipeline: explicitly disabled via build config 00:02:11.634 graph: explicitly disabled via build config 00:02:11.634 node: explicitly disabled via build config 00:02:11.634 00:02:11.634 drivers: 00:02:11.634 common/cpt: not in enabled drivers build config 00:02:11.634 common/dpaax: not in enabled drivers build config 00:02:11.634 common/iavf: not in enabled drivers build config 00:02:11.634 common/idpf: not in enabled drivers build config 00:02:11.634 common/ionic: not in enabled drivers build config 00:02:11.634 common/mvep: not in enabled drivers build config 00:02:11.634 common/octeontx: not in enabled drivers build config 00:02:11.634 bus/auxiliary: not in enabled drivers build config 00:02:11.634 bus/cdx: not in enabled drivers build config 00:02:11.634 bus/dpaa: not in enabled drivers build config 00:02:11.634 bus/fslmc: not in enabled drivers build config 00:02:11.634 bus/ifpga: not in enabled drivers build config 00:02:11.634 bus/platform: not in enabled drivers build config 00:02:11.634 bus/uacce: not in enabled drivers build config 00:02:11.634 bus/vmbus: not in enabled drivers build config 00:02:11.634 common/cnxk: not in enabled drivers build config 00:02:11.634 common/mlx5: not in enabled drivers build config 00:02:11.634 common/nfp: not in enabled drivers build config 00:02:11.634 common/nitrox: not in enabled drivers build config 00:02:11.634 common/qat: not in enabled drivers build config 00:02:11.634 common/sfc_efx: not in enabled drivers build config 00:02:11.634 mempool/bucket: not in enabled drivers build config 00:02:11.634 mempool/cnxk: not in enabled drivers build config 00:02:11.634 mempool/dpaa: not in enabled drivers build config 00:02:11.634 mempool/dpaa2: not in enabled drivers build config 00:02:11.634 mempool/octeontx: not in enabled drivers build config 00:02:11.634 mempool/stack: not in enabled drivers build config 00:02:11.634 dma/cnxk: not in enabled drivers build 
config 00:02:11.634 dma/dpaa: not in enabled drivers build config 00:02:11.634 dma/dpaa2: not in enabled drivers build config 00:02:11.634 dma/hisilicon: not in enabled drivers build config 00:02:11.634 dma/idxd: not in enabled drivers build config 00:02:11.634 dma/ioat: not in enabled drivers build config 00:02:11.634 dma/skeleton: not in enabled drivers build config 00:02:11.634 net/af_packet: not in enabled drivers build config 00:02:11.634 net/af_xdp: not in enabled drivers build config 00:02:11.634 net/ark: not in enabled drivers build config 00:02:11.634 net/atlantic: not in enabled drivers build config 00:02:11.634 net/avp: not in enabled drivers build config 00:02:11.634 net/axgbe: not in enabled drivers build config 00:02:11.634 net/bnx2x: not in enabled drivers build config 00:02:11.634 net/bnxt: not in enabled drivers build config 00:02:11.634 net/bonding: not in enabled drivers build config 00:02:11.634 net/cnxk: not in enabled drivers build config 00:02:11.634 net/cpfl: not in enabled drivers build config 00:02:11.634 net/cxgbe: not in enabled drivers build config 00:02:11.634 net/dpaa: not in enabled drivers build config 00:02:11.634 net/dpaa2: not in enabled drivers build config 00:02:11.634 net/e1000: not in enabled drivers build config 00:02:11.634 net/ena: not in enabled drivers build config 00:02:11.634 net/enetc: not in enabled drivers build config 00:02:11.634 net/enetfec: not in enabled drivers build config 00:02:11.634 net/enic: not in enabled drivers build config 00:02:11.634 net/failsafe: not in enabled drivers build config 00:02:11.634 net/fm10k: not in enabled drivers build config 00:02:11.634 net/gve: not in enabled drivers build config 00:02:11.634 net/hinic: not in enabled drivers build config 00:02:11.634 net/hns3: not in enabled drivers build config 00:02:11.634 net/i40e: not in enabled drivers build config 00:02:11.634 net/iavf: not in enabled drivers build config 00:02:11.634 net/ice: not in enabled drivers build config 00:02:11.634 net/idpf: not in enabled drivers build config 00:02:11.634 net/igc: not in enabled drivers build config 00:02:11.634 net/ionic: not in enabled drivers build config 00:02:11.634 net/ipn3ke: not in enabled drivers build config 00:02:11.634 net/ixgbe: not in enabled drivers build config 00:02:11.634 net/mana: not in enabled drivers build config 00:02:11.634 net/memif: not in enabled drivers build config 00:02:11.634 net/mlx4: not in enabled drivers build config 00:02:11.634 net/mlx5: not in enabled drivers build config 00:02:11.634 net/mvneta: not in enabled drivers build config 00:02:11.634 net/mvpp2: not in enabled drivers build config 00:02:11.634 net/netvsc: not in enabled drivers build config 00:02:11.634 net/nfb: not in enabled drivers build config 00:02:11.634 net/nfp: not in enabled drivers build config 00:02:11.634 net/ngbe: not in enabled drivers build config 00:02:11.634 net/null: not in enabled drivers build config 00:02:11.634 net/octeontx: not in enabled drivers build config 00:02:11.634 net/octeon_ep: not in enabled drivers build config 00:02:11.634 net/pcap: not in enabled drivers build config 00:02:11.634 net/pfe: not in enabled drivers build config 00:02:11.634 net/qede: not in enabled drivers build config 00:02:11.634 net/ring: not in enabled drivers build config 00:02:11.634 net/sfc: not in enabled drivers build config 00:02:11.634 net/softnic: not in enabled drivers build config 00:02:11.635 net/tap: not in enabled drivers build config 00:02:11.635 net/thunderx: not in enabled drivers build config 00:02:11.635 
net/txgbe: not in enabled drivers build config 00:02:11.635 net/vdev_netvsc: not in enabled drivers build config 00:02:11.635 net/vhost: not in enabled drivers build config 00:02:11.635 net/virtio: not in enabled drivers build config 00:02:11.635 net/vmxnet3: not in enabled drivers build config 00:02:11.635 raw/*: missing internal dependency, "rawdev" 00:02:11.635 crypto/armv8: not in enabled drivers build config 00:02:11.635 crypto/bcmfs: not in enabled drivers build config 00:02:11.635 crypto/caam_jr: not in enabled drivers build config 00:02:11.635 crypto/ccp: not in enabled drivers build config 00:02:11.635 crypto/cnxk: not in enabled drivers build config 00:02:11.635 crypto/dpaa_sec: not in enabled drivers build config 00:02:11.635 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.635 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.635 crypto/mlx5: not in enabled drivers build config 00:02:11.635 crypto/mvsam: not in enabled drivers build config 00:02:11.635 crypto/nitrox: not in enabled drivers build config 00:02:11.635 crypto/null: not in enabled drivers build config 00:02:11.635 crypto/octeontx: not in enabled drivers build config 00:02:11.635 crypto/openssl: not in enabled drivers build config 00:02:11.635 crypto/scheduler: not in enabled drivers build config 00:02:11.635 crypto/uadk: not in enabled drivers build config 00:02:11.635 crypto/virtio: not in enabled drivers build config 00:02:11.635 compress/isal: not in enabled drivers build config 00:02:11.635 compress/mlx5: not in enabled drivers build config 00:02:11.635 compress/nitrox: not in enabled drivers build config 00:02:11.635 compress/octeontx: not in enabled drivers build config 00:02:11.635 compress/zlib: not in enabled drivers build config 00:02:11.635 regex/*: missing internal dependency, "regexdev" 00:02:11.635 ml/*: missing internal dependency, "mldev" 00:02:11.635 vdpa/ifc: not in enabled drivers build config 00:02:11.635 vdpa/mlx5: not in enabled drivers build config 00:02:11.635 vdpa/nfp: not in enabled drivers build config 00:02:11.635 vdpa/sfc: not in enabled drivers build config 00:02:11.635 event/*: missing internal dependency, "eventdev" 00:02:11.635 baseband/*: missing internal dependency, "bbdev" 00:02:11.635 gpu/*: missing internal dependency, "gpudev" 00:02:11.635 00:02:11.635 00:02:11.635 Build targets in project: 85 00:02:11.635 00:02:11.635 DPDK 24.03.0 00:02:11.635 00:02:11.635 User defined options 00:02:11.635 buildtype : debug 00:02:11.635 default_library : shared 00:02:11.635 libdir : lib 00:02:11.635 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.635 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:11.635 c_link_args : 00:02:11.635 cpu_instruction_set: native 00:02:11.635 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:11.635 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:11.635 enable_docs : false 00:02:11.635 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:11.635 enable_kmods : false 00:02:11.635 max_lcores : 128 00:02:11.635 tests : false 00:02:11.635 00:02:11.635 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.893 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:11.893 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:11.893 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.893 [3/268] Linking static target lib/librte_kvargs.a 00:02:11.893 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:11.893 [5/268] Linking static target lib/librte_log.a 00:02:11.894 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.460 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.461 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.720 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.720 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.720 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.720 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.720 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.720 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.720 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.979 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:12.979 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.979 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.979 [19/268] Linking static target lib/librte_telemetry.a 00:02:12.979 [20/268] Linking target lib/librte_log.so.24.1 00:02:13.237 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.237 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:13.495 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.495 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.495 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.495 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.495 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.754 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.754 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.754 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.754 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.754 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.012 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:14.012 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.012 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.012 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.270 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.529 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.529 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.529 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.529 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.529 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.529 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.788 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.788 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.788 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.788 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.047 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.047 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.305 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.305 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.305 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.563 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.563 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.821 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.821 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.821 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.821 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.821 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.079 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.079 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.335 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.593 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.593 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.593 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.593 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.593 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.593 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.593 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.850 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.108 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.108 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.375 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.375 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.375 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.375 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.648 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.648 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.648 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.648 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.648 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.648 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.905 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.471 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.471 [85/268] Linking static target lib/librte_eal.a 00:02:18.471 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.471 [87/268] Linking static target lib/librte_ring.a 00:02:18.471 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.471 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.728 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.728 [91/268] Linking static target lib/librte_rcu.a 00:02:18.728 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.728 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.728 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.728 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.728 [96/268] Linking static target lib/librte_mempool.a 00:02:18.984 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.984 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.984 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.984 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.242 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.500 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.500 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.500 [104/268] Linking static target lib/librte_mbuf.a 00:02:19.759 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.759 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.759 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.759 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.759 [109/268] Linking static target lib/librte_meter.a 00:02:19.760 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.760 [111/268] Linking static target lib/librte_net.a 00:02:20.018 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.018 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.277 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.277 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.277 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.536 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.536 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.794 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.794 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:21.051 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:21.309 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:21.309 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:21.567 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.567 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.567 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.567 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.567 [128/268] Linking static target lib/librte_pci.a 00:02:21.567 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.567 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.567 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.825 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.825 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:21.825 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.825 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.825 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.825 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.086 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.086 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.086 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.086 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.086 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.086 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.086 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:22.086 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.347 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.347 [147/268] Linking static target lib/librte_ethdev.a 00:02:22.347 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.347 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.347 [150/268] Linking static target lib/librte_cmdline.a 00:02:22.914 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.914 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.914 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.914 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.914 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.914 [156/268] Linking static target lib/librte_timer.a 00:02:22.914 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.914 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.914 [159/268] Linking static target lib/librte_hash.a 00:02:23.481 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.481 
[161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.481 [162/268] Linking static target lib/librte_compressdev.a 00:02:23.481 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.481 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.481 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:23.739 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.739 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.998 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.998 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:23.998 [170/268] Linking static target lib/librte_dmadev.a 00:02:24.256 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.256 [172/268] Linking static target lib/librte_cryptodev.a 00:02:24.256 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.256 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.256 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.256 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.256 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.514 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.773 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.773 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.773 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.773 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.773 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.031 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.031 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.031 [186/268] Linking static target lib/librte_power.a 00:02:25.289 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.289 [188/268] Linking static target lib/librte_reorder.a 00:02:25.547 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.547 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.547 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.547 [192/268] Linking static target lib/librte_security.a 00:02:25.547 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.547 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.806 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.806 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.065 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.065 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.323 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:26.323 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.323 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.323 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.323 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.582 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.582 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.840 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.840 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.840 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.840 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.840 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.840 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.098 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:27.098 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.098 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.098 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.098 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.098 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:27.098 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.098 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.098 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:27.098 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.098 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:27.357 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.357 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.357 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.357 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.357 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:27.615 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.558 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.558 [230/268] Linking static target lib/librte_vhost.a 00:02:29.125 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.125 [232/268] Linking target lib/librte_eal.so.24.1 00:02:29.383 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:29.383 [234/268] Linking target lib/librte_meter.so.24.1 00:02:29.383 [235/268] Linking target lib/librte_timer.so.24.1 00:02:29.383 [236/268] Linking target lib/librte_pci.so.24.1 00:02:29.383 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:29.383 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:29.383 [239/268] Linking target 
lib/librte_ring.so.24.1 00:02:29.642 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:29.642 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:29.642 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:29.642 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:29.642 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:29.642 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:29.642 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:29.642 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:29.642 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.642 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.901 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:29.901 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:29.901 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:29.901 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:29.901 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:30.159 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:30.159 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:30.159 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:30.159 [258/268] Linking target lib/librte_net.so.24.1 00:02:30.159 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:30.159 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:30.418 [261/268] Linking target lib/librte_hash.so.24.1 00:02:30.418 [262/268] Linking target lib/librte_security.so.24.1 00:02:30.418 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:30.418 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:30.418 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:30.418 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:30.677 [267/268] Linking target lib/librte_power.so.24.1 00:02:30.677 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:30.677 INFO: autodetecting backend as ninja 00:02:30.677 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:31.612 CC lib/ut_mock/mock.o 00:02:31.612 CC lib/ut/ut.o 00:02:31.612 CC lib/log/log.o 00:02:31.612 CC lib/log/log_flags.o 00:02:31.612 CC lib/log/log_deprecated.o 00:02:31.870 LIB libspdk_ut.a 00:02:31.870 LIB libspdk_ut_mock.a 00:02:31.870 LIB libspdk_log.a 00:02:31.870 SO libspdk_ut.so.2.0 00:02:31.870 SO libspdk_ut_mock.so.6.0 00:02:31.870 SO libspdk_log.so.7.0 00:02:32.129 SYMLINK libspdk_ut.so 00:02:32.129 SYMLINK libspdk_ut_mock.so 00:02:32.129 SYMLINK libspdk_log.so 00:02:32.129 CC lib/dma/dma.o 00:02:32.129 CC lib/ioat/ioat.o 00:02:32.129 CXX lib/trace_parser/trace.o 00:02:32.129 CC lib/util/bit_array.o 00:02:32.129 CC lib/util/base64.o 00:02:32.129 CC lib/util/cpuset.o 00:02:32.129 CC lib/util/crc16.o 00:02:32.129 CC lib/util/crc32.o 00:02:32.129 CC lib/util/crc32c.o 00:02:32.387 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.387 CC lib/util/crc32_ieee.o 00:02:32.387 CC lib/util/crc64.o 00:02:32.387 
CC lib/util/dif.o 00:02:32.387 CC lib/vfio_user/host/vfio_user.o 00:02:32.646 LIB libspdk_dma.a 00:02:32.646 CC lib/util/fd.o 00:02:32.646 CC lib/util/fd_group.o 00:02:32.646 SO libspdk_dma.so.4.0 00:02:32.646 LIB libspdk_ioat.a 00:02:32.646 CC lib/util/file.o 00:02:32.646 CC lib/util/hexlify.o 00:02:32.646 SO libspdk_ioat.so.7.0 00:02:32.646 SYMLINK libspdk_dma.so 00:02:32.646 CC lib/util/iov.o 00:02:32.646 CC lib/util/math.o 00:02:32.646 CC lib/util/net.o 00:02:32.646 SYMLINK libspdk_ioat.so 00:02:32.646 CC lib/util/pipe.o 00:02:32.646 LIB libspdk_vfio_user.a 00:02:32.646 CC lib/util/strerror_tls.o 00:02:32.646 CC lib/util/string.o 00:02:32.904 SO libspdk_vfio_user.so.5.0 00:02:32.904 CC lib/util/uuid.o 00:02:32.904 CC lib/util/xor.o 00:02:32.904 CC lib/util/zipf.o 00:02:32.904 SYMLINK libspdk_vfio_user.so 00:02:32.904 LIB libspdk_util.a 00:02:33.162 SO libspdk_util.so.10.0 00:02:33.421 SYMLINK libspdk_util.so 00:02:33.421 LIB libspdk_trace_parser.a 00:02:33.421 SO libspdk_trace_parser.so.5.0 00:02:33.421 SYMLINK libspdk_trace_parser.so 00:02:33.421 CC lib/conf/conf.o 00:02:33.421 CC lib/rdma_provider/common.o 00:02:33.421 CC lib/json/json_parse.o 00:02:33.421 CC lib/json/json_util.o 00:02:33.421 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:33.421 CC lib/vmd/led.o 00:02:33.421 CC lib/vmd/vmd.o 00:02:33.421 CC lib/env_dpdk/env.o 00:02:33.421 CC lib/idxd/idxd.o 00:02:33.421 CC lib/rdma_utils/rdma_utils.o 00:02:33.680 CC lib/env_dpdk/memory.o 00:02:33.680 CC lib/env_dpdk/pci.o 00:02:33.680 LIB libspdk_rdma_provider.a 00:02:33.680 LIB libspdk_conf.a 00:02:33.680 SO libspdk_rdma_provider.so.6.0 00:02:33.680 CC lib/idxd/idxd_user.o 00:02:33.680 SO libspdk_conf.so.6.0 00:02:33.680 CC lib/json/json_write.o 00:02:33.680 SYMLINK libspdk_rdma_provider.so 00:02:33.680 CC lib/idxd/idxd_kernel.o 00:02:33.680 LIB libspdk_rdma_utils.a 00:02:33.938 SYMLINK libspdk_conf.so 00:02:33.938 CC lib/env_dpdk/init.o 00:02:33.938 SO libspdk_rdma_utils.so.1.0 00:02:33.938 SYMLINK libspdk_rdma_utils.so 00:02:33.938 CC lib/env_dpdk/threads.o 00:02:33.938 CC lib/env_dpdk/pci_ioat.o 00:02:33.938 CC lib/env_dpdk/pci_virtio.o 00:02:33.938 LIB libspdk_idxd.a 00:02:33.938 CC lib/env_dpdk/pci_vmd.o 00:02:33.938 CC lib/env_dpdk/pci_idxd.o 00:02:33.938 SO libspdk_idxd.so.12.0 00:02:33.938 LIB libspdk_json.a 00:02:34.197 CC lib/env_dpdk/pci_event.o 00:02:34.197 SO libspdk_json.so.6.0 00:02:34.197 LIB libspdk_vmd.a 00:02:34.197 SYMLINK libspdk_idxd.so 00:02:34.197 CC lib/env_dpdk/sigbus_handler.o 00:02:34.197 CC lib/env_dpdk/pci_dpdk.o 00:02:34.197 SO libspdk_vmd.so.6.0 00:02:34.197 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:34.197 SYMLINK libspdk_json.so 00:02:34.197 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:34.197 SYMLINK libspdk_vmd.so 00:02:34.455 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.455 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.455 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.455 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.741 LIB libspdk_jsonrpc.a 00:02:34.741 SO libspdk_jsonrpc.so.6.0 00:02:34.741 SYMLINK libspdk_jsonrpc.so 00:02:35.006 LIB libspdk_env_dpdk.a 00:02:35.006 CC lib/rpc/rpc.o 00:02:35.006 SO libspdk_env_dpdk.so.15.0 00:02:35.264 LIB libspdk_rpc.a 00:02:35.265 SYMLINK libspdk_env_dpdk.so 00:02:35.265 SO libspdk_rpc.so.6.0 00:02:35.265 SYMLINK libspdk_rpc.so 00:02:35.523 CC lib/notify/notify.o 00:02:35.523 CC lib/notify/notify_rpc.o 00:02:35.523 CC lib/trace/trace.o 00:02:35.523 CC lib/keyring/keyring_rpc.o 00:02:35.523 CC lib/keyring/keyring.o 00:02:35.523 CC lib/trace/trace_rpc.o 00:02:35.523 CC 
lib/trace/trace_flags.o 00:02:35.781 LIB libspdk_notify.a 00:02:35.781 SO libspdk_notify.so.6.0 00:02:35.781 LIB libspdk_keyring.a 00:02:36.040 LIB libspdk_trace.a 00:02:36.040 SYMLINK libspdk_notify.so 00:02:36.040 SO libspdk_keyring.so.1.0 00:02:36.040 SO libspdk_trace.so.10.0 00:02:36.040 SYMLINK libspdk_keyring.so 00:02:36.040 SYMLINK libspdk_trace.so 00:02:36.299 CC lib/thread/thread.o 00:02:36.299 CC lib/thread/iobuf.o 00:02:36.299 CC lib/sock/sock.o 00:02:36.299 CC lib/sock/sock_rpc.o 00:02:36.866 LIB libspdk_sock.a 00:02:36.866 SO libspdk_sock.so.10.0 00:02:36.866 SYMLINK libspdk_sock.so 00:02:37.126 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.126 CC lib/nvme/nvme_ctrlr.o 00:02:37.126 CC lib/nvme/nvme_fabric.o 00:02:37.126 CC lib/nvme/nvme_ns_cmd.o 00:02:37.126 CC lib/nvme/nvme_ns.o 00:02:37.126 CC lib/nvme/nvme_pcie_common.o 00:02:37.126 CC lib/nvme/nvme_pcie.o 00:02:37.126 CC lib/nvme/nvme_qpair.o 00:02:37.384 CC lib/nvme/nvme.o 00:02:37.950 LIB libspdk_thread.a 00:02:37.950 SO libspdk_thread.so.10.1 00:02:37.950 SYMLINK libspdk_thread.so 00:02:37.950 CC lib/nvme/nvme_quirks.o 00:02:37.950 CC lib/nvme/nvme_transport.o 00:02:37.950 CC lib/nvme/nvme_discovery.o 00:02:38.208 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.208 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.208 CC lib/nvme/nvme_tcp.o 00:02:38.208 CC lib/nvme/nvme_opal.o 00:02:38.466 CC lib/accel/accel.o 00:02:38.466 CC lib/nvme/nvme_io_msg.o 00:02:38.724 CC lib/nvme/nvme_poll_group.o 00:02:38.724 CC lib/nvme/nvme_zns.o 00:02:38.724 CC lib/nvme/nvme_stubs.o 00:02:38.724 CC lib/nvme/nvme_auth.o 00:02:38.724 CC lib/accel/accel_rpc.o 00:02:38.724 CC lib/accel/accel_sw.o 00:02:39.289 CC lib/blob/blobstore.o 00:02:39.289 CC lib/blob/request.o 00:02:39.289 CC lib/blob/zeroes.o 00:02:39.289 CC lib/blob/blob_bs_dev.o 00:02:39.289 LIB libspdk_accel.a 00:02:39.289 SO libspdk_accel.so.16.0 00:02:39.289 CC lib/nvme/nvme_cuse.o 00:02:39.289 SYMLINK libspdk_accel.so 00:02:39.289 CC lib/nvme/nvme_rdma.o 00:02:39.547 CC lib/init/json_config.o 00:02:39.547 CC lib/virtio/virtio.o 00:02:39.547 CC lib/init/subsystem.o 00:02:39.547 CC lib/init/subsystem_rpc.o 00:02:39.547 CC lib/bdev/bdev.o 00:02:39.547 CC lib/bdev/bdev_rpc.o 00:02:39.805 CC lib/init/rpc.o 00:02:39.805 CC lib/virtio/virtio_vhost_user.o 00:02:39.805 CC lib/virtio/virtio_vfio_user.o 00:02:39.805 CC lib/bdev/bdev_zone.o 00:02:39.805 CC lib/virtio/virtio_pci.o 00:02:39.805 LIB libspdk_init.a 00:02:39.805 SO libspdk_init.so.5.0 00:02:40.063 CC lib/bdev/part.o 00:02:40.063 CC lib/bdev/scsi_nvme.o 00:02:40.063 SYMLINK libspdk_init.so 00:02:40.063 LIB libspdk_virtio.a 00:02:40.063 SO libspdk_virtio.so.7.0 00:02:40.063 CC lib/event/app.o 00:02:40.063 CC lib/event/log_rpc.o 00:02:40.063 CC lib/event/reactor.o 00:02:40.063 CC lib/event/app_rpc.o 00:02:40.063 SYMLINK libspdk_virtio.so 00:02:40.321 CC lib/event/scheduler_static.o 00:02:40.579 LIB libspdk_event.a 00:02:40.579 SO libspdk_event.so.14.0 00:02:40.837 LIB libspdk_nvme.a 00:02:40.837 SYMLINK libspdk_event.so 00:02:41.096 SO libspdk_nvme.so.13.1 00:02:41.355 SYMLINK libspdk_nvme.so 00:02:41.939 LIB libspdk_blob.a 00:02:41.939 SO libspdk_blob.so.11.0 00:02:42.197 LIB libspdk_bdev.a 00:02:42.197 SYMLINK libspdk_blob.so 00:02:42.197 SO libspdk_bdev.so.16.0 00:02:42.197 SYMLINK libspdk_bdev.so 00:02:42.456 CC lib/lvol/lvol.o 00:02:42.456 CC lib/blobfs/blobfs.o 00:02:42.456 CC lib/blobfs/tree.o 00:02:42.456 CC lib/nvmf/ctrlr.o 00:02:42.456 CC lib/nvmf/ctrlr_discovery.o 00:02:42.456 CC lib/nvmf/ctrlr_bdev.o 00:02:42.456 CC lib/scsi/dev.o 
00:02:42.456 CC lib/ftl/ftl_core.o 00:02:42.456 CC lib/ublk/ublk.o 00:02:42.456 CC lib/nbd/nbd.o 00:02:42.456 CC lib/nbd/nbd_rpc.o 00:02:42.714 CC lib/scsi/lun.o 00:02:42.714 CC lib/scsi/port.o 00:02:42.973 CC lib/ftl/ftl_init.o 00:02:42.973 LIB libspdk_nbd.a 00:02:42.973 CC lib/scsi/scsi.o 00:02:42.973 SO libspdk_nbd.so.7.0 00:02:42.973 CC lib/nvmf/subsystem.o 00:02:42.973 SYMLINK libspdk_nbd.so 00:02:42.973 CC lib/nvmf/nvmf.o 00:02:42.973 CC lib/ublk/ublk_rpc.o 00:02:42.973 CC lib/ftl/ftl_layout.o 00:02:43.231 CC lib/ftl/ftl_debug.o 00:02:43.231 LIB libspdk_blobfs.a 00:02:43.231 CC lib/scsi/scsi_bdev.o 00:02:43.231 CC lib/nvmf/nvmf_rpc.o 00:02:43.231 SO libspdk_blobfs.so.10.0 00:02:43.231 SYMLINK libspdk_blobfs.so 00:02:43.231 CC lib/nvmf/transport.o 00:02:43.231 LIB libspdk_ublk.a 00:02:43.231 SO libspdk_ublk.so.3.0 00:02:43.231 CC lib/scsi/scsi_pr.o 00:02:43.490 SYMLINK libspdk_ublk.so 00:02:43.490 CC lib/scsi/scsi_rpc.o 00:02:43.490 LIB libspdk_lvol.a 00:02:43.490 CC lib/ftl/ftl_io.o 00:02:43.490 SO libspdk_lvol.so.10.0 00:02:43.490 SYMLINK libspdk_lvol.so 00:02:43.490 CC lib/nvmf/tcp.o 00:02:43.490 CC lib/scsi/task.o 00:02:43.490 CC lib/ftl/ftl_sb.o 00:02:43.748 CC lib/nvmf/stubs.o 00:02:43.748 CC lib/nvmf/mdns_server.o 00:02:43.748 LIB libspdk_scsi.a 00:02:43.748 CC lib/ftl/ftl_l2p.o 00:02:43.748 SO libspdk_scsi.so.9.0 00:02:44.006 CC lib/nvmf/rdma.o 00:02:44.006 SYMLINK libspdk_scsi.so 00:02:44.006 CC lib/nvmf/auth.o 00:02:44.006 CC lib/ftl/ftl_l2p_flat.o 00:02:44.006 CC lib/ftl/ftl_nv_cache.o 00:02:44.006 CC lib/iscsi/conn.o 00:02:44.006 CC lib/ftl/ftl_band.o 00:02:44.006 CC lib/vhost/vhost.o 00:02:44.265 CC lib/iscsi/init_grp.o 00:02:44.265 CC lib/iscsi/iscsi.o 00:02:44.265 CC lib/iscsi/md5.o 00:02:44.523 CC lib/ftl/ftl_band_ops.o 00:02:44.523 CC lib/iscsi/param.o 00:02:44.523 CC lib/ftl/ftl_writer.o 00:02:44.781 CC lib/iscsi/portal_grp.o 00:02:44.781 CC lib/iscsi/tgt_node.o 00:02:44.781 CC lib/iscsi/iscsi_subsystem.o 00:02:44.781 CC lib/iscsi/iscsi_rpc.o 00:02:44.781 CC lib/vhost/vhost_rpc.o 00:02:44.781 CC lib/iscsi/task.o 00:02:45.039 CC lib/ftl/ftl_rq.o 00:02:45.039 CC lib/ftl/ftl_reloc.o 00:02:45.039 CC lib/ftl/ftl_l2p_cache.o 00:02:45.039 CC lib/ftl/ftl_p2l.o 00:02:45.039 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.297 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.298 CC lib/vhost/vhost_scsi.o 00:02:45.298 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.298 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.298 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.298 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.557 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.557 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.557 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.557 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.557 LIB libspdk_iscsi.a 00:02:45.557 SO libspdk_iscsi.so.8.0 00:02:45.815 CC lib/vhost/vhost_blk.o 00:02:45.815 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.815 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.815 CC lib/vhost/rte_vhost_user.o 00:02:45.815 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.815 CC lib/ftl/utils/ftl_conf.o 00:02:45.815 CC lib/ftl/utils/ftl_md.o 00:02:45.815 SYMLINK libspdk_iscsi.so 00:02:45.815 CC lib/ftl/utils/ftl_mempool.o 00:02:45.815 LIB libspdk_nvmf.a 00:02:45.815 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.074 CC lib/ftl/utils/ftl_property.o 00:02:46.074 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.074 SO libspdk_nvmf.so.19.0 00:02:46.074 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.074 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.074 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.074 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.333 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.333 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:46.333 SYMLINK libspdk_nvmf.so 00:02:46.333 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.333 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.333 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.333 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.333 CC lib/ftl/base/ftl_base_dev.o 00:02:46.333 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.333 CC lib/ftl/ftl_trace.o 00:02:46.591 LIB libspdk_ftl.a 00:02:46.850 LIB libspdk_vhost.a 00:02:46.850 SO libspdk_vhost.so.8.0 00:02:46.850 SO libspdk_ftl.so.9.0 00:02:47.109 SYMLINK libspdk_vhost.so 00:02:47.368 SYMLINK libspdk_ftl.so 00:02:47.627 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.627 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.627 CC module/blob/bdev/blob_bdev.o 00:02:47.627 CC module/keyring/file/keyring.o 00:02:47.627 CC module/accel/error/accel_error.o 00:02:47.886 CC module/sock/uring/uring.o 00:02:47.886 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.886 CC module/sock/posix/posix.o 00:02:47.886 CC module/keyring/linux/keyring.o 00:02:47.886 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.886 LIB libspdk_env_dpdk_rpc.a 00:02:47.886 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.886 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.886 CC module/accel/error/accel_error_rpc.o 00:02:47.886 CC module/keyring/linux/keyring_rpc.o 00:02:47.886 CC module/keyring/file/keyring_rpc.o 00:02:47.886 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.886 LIB libspdk_scheduler_gscheduler.a 00:02:47.886 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.886 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.886 LIB libspdk_scheduler_dynamic.a 00:02:47.886 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.144 SO libspdk_scheduler_dynamic.so.4.0 00:02:48.144 LIB libspdk_blob_bdev.a 00:02:48.144 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.144 LIB libspdk_keyring_linux.a 00:02:48.144 LIB libspdk_keyring_file.a 00:02:48.144 LIB libspdk_accel_error.a 00:02:48.144 SO libspdk_blob_bdev.so.11.0 00:02:48.144 SO libspdk_keyring_linux.so.1.0 00:02:48.144 SO libspdk_keyring_file.so.1.0 00:02:48.144 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.144 SO libspdk_accel_error.so.2.0 00:02:48.144 CC module/accel/ioat/accel_ioat.o 00:02:48.144 SYMLINK libspdk_blob_bdev.so 00:02:48.144 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.144 SYMLINK libspdk_keyring_file.so 00:02:48.144 SYMLINK libspdk_accel_error.so 00:02:48.144 SYMLINK libspdk_keyring_linux.so 00:02:48.144 CC module/accel/dsa/accel_dsa.o 00:02:48.144 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.144 CC module/accel/iaa/accel_iaa.o 00:02:48.144 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.416 LIB libspdk_accel_ioat.a 00:02:48.416 SO libspdk_accel_ioat.so.6.0 00:02:48.416 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.416 CC module/bdev/delay/vbdev_delay.o 00:02:48.416 SYMLINK libspdk_accel_ioat.so 00:02:48.416 LIB libspdk_accel_iaa.a 00:02:48.416 CC module/bdev/error/vbdev_error.o 00:02:48.416 LIB libspdk_sock_uring.a 00:02:48.416 LIB libspdk_accel_dsa.a 00:02:48.416 SO libspdk_sock_uring.so.5.0 00:02:48.416 SO libspdk_accel_iaa.so.3.0 00:02:48.416 SO libspdk_accel_dsa.so.5.0 00:02:48.416 LIB libspdk_sock_posix.a 00:02:48.416 CC module/bdev/gpt/gpt.o 00:02:48.691 SYMLINK libspdk_sock_uring.so 00:02:48.691 SO libspdk_sock_posix.so.6.0 00:02:48.691 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.691 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.691 SYMLINK libspdk_accel_dsa.so 00:02:48.691 SYMLINK 
libspdk_accel_iaa.so 00:02:48.691 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.691 CC module/bdev/malloc/bdev_malloc.o 00:02:48.691 SYMLINK libspdk_sock_posix.so 00:02:48.691 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.691 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.691 LIB libspdk_blobfs_bdev.a 00:02:48.691 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.691 LIB libspdk_bdev_delay.a 00:02:48.691 SO libspdk_blobfs_bdev.so.6.0 00:02:48.691 CC module/bdev/null/bdev_null.o 00:02:48.949 CC module/bdev/nvme/bdev_nvme.o 00:02:48.949 SO libspdk_bdev_delay.so.6.0 00:02:48.949 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.949 SYMLINK libspdk_blobfs_bdev.so 00:02:48.949 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.949 LIB libspdk_bdev_error.a 00:02:48.949 SYMLINK libspdk_bdev_delay.so 00:02:48.949 SO libspdk_bdev_error.so.6.0 00:02:48.949 SYMLINK libspdk_bdev_error.so 00:02:48.949 LIB libspdk_bdev_malloc.a 00:02:48.949 LIB libspdk_bdev_gpt.a 00:02:48.949 SO libspdk_bdev_malloc.so.6.0 00:02:48.949 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.207 SO libspdk_bdev_gpt.so.6.0 00:02:49.207 CC module/bdev/null/bdev_null_rpc.o 00:02:49.207 CC module/bdev/split/vbdev_split.o 00:02:49.207 CC module/bdev/raid/bdev_raid.o 00:02:49.207 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.207 SYMLINK libspdk_bdev_malloc.so 00:02:49.207 SYMLINK libspdk_bdev_gpt.so 00:02:49.207 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.207 LIB libspdk_bdev_passthru.a 00:02:49.207 SO libspdk_bdev_passthru.so.6.0 00:02:49.207 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.207 LIB libspdk_bdev_null.a 00:02:49.207 SO libspdk_bdev_null.so.6.0 00:02:49.207 SYMLINK libspdk_bdev_passthru.so 00:02:49.207 CC module/bdev/uring/bdev_uring.o 00:02:49.465 LIB libspdk_bdev_split.a 00:02:49.465 SYMLINK libspdk_bdev_null.so 00:02:49.465 SO libspdk_bdev_split.so.6.0 00:02:49.465 LIB libspdk_bdev_lvol.a 00:02:49.465 SYMLINK libspdk_bdev_split.so 00:02:49.465 CC module/bdev/aio/bdev_aio.o 00:02:49.465 SO libspdk_bdev_lvol.so.6.0 00:02:49.465 CC module/bdev/ftl/bdev_ftl.o 00:02:49.465 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.465 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.465 SYMLINK libspdk_bdev_lvol.so 00:02:49.465 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.723 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.723 CC module/bdev/uring/bdev_uring_rpc.o 00:02:49.723 LIB libspdk_bdev_zone_block.a 00:02:49.723 SO libspdk_bdev_zone_block.so.6.0 00:02:49.723 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.723 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.723 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.723 CC module/bdev/raid/raid0.o 00:02:49.723 SYMLINK libspdk_bdev_zone_block.so 00:02:49.723 CC module/bdev/raid/raid1.o 00:02:49.981 LIB libspdk_bdev_uring.a 00:02:49.981 SO libspdk_bdev_uring.so.6.0 00:02:49.981 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.981 LIB libspdk_bdev_aio.a 00:02:49.981 SYMLINK libspdk_bdev_uring.so 00:02:49.981 CC module/bdev/nvme/nvme_rpc.o 00:02:49.981 LIB libspdk_bdev_ftl.a 00:02:49.981 SO libspdk_bdev_aio.so.6.0 00:02:49.981 SO libspdk_bdev_ftl.so.6.0 00:02:49.981 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.981 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.981 SYMLINK libspdk_bdev_aio.so 00:02:49.981 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.981 CC module/bdev/raid/concat.o 00:02:49.981 SYMLINK libspdk_bdev_ftl.so 00:02:49.981 CC module/bdev/nvme/vbdev_opal.o 00:02:50.239 LIB libspdk_bdev_iscsi.a 00:02:50.239 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:50.239 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:50.239 SO libspdk_bdev_iscsi.so.6.0 00:02:50.239 SYMLINK libspdk_bdev_iscsi.so 00:02:50.239 LIB libspdk_bdev_virtio.a 00:02:50.239 LIB libspdk_bdev_raid.a 00:02:50.497 SO libspdk_bdev_virtio.so.6.0 00:02:50.497 SO libspdk_bdev_raid.so.6.0 00:02:50.498 SYMLINK libspdk_bdev_virtio.so 00:02:50.498 SYMLINK libspdk_bdev_raid.so 00:02:51.065 LIB libspdk_bdev_nvme.a 00:02:51.065 SO libspdk_bdev_nvme.so.7.0 00:02:51.323 SYMLINK libspdk_bdev_nvme.so 00:02:51.581 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.581 CC module/event/subsystems/vmd/vmd.o 00:02:51.581 CC module/event/subsystems/sock/sock.o 00:02:51.581 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.581 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.581 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.581 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.581 CC module/event/subsystems/keyring/keyring.o 00:02:51.840 LIB libspdk_event_scheduler.a 00:02:51.840 LIB libspdk_event_vhost_blk.a 00:02:51.840 LIB libspdk_event_vmd.a 00:02:51.840 LIB libspdk_event_sock.a 00:02:51.840 LIB libspdk_event_keyring.a 00:02:51.840 LIB libspdk_event_iobuf.a 00:02:51.840 SO libspdk_event_vhost_blk.so.3.0 00:02:51.840 SO libspdk_event_sock.so.5.0 00:02:51.840 SO libspdk_event_scheduler.so.4.0 00:02:51.840 SO libspdk_event_keyring.so.1.0 00:02:51.840 SO libspdk_event_vmd.so.6.0 00:02:51.840 SO libspdk_event_iobuf.so.3.0 00:02:51.840 SYMLINK libspdk_event_vhost_blk.so 00:02:51.840 SYMLINK libspdk_event_keyring.so 00:02:51.840 SYMLINK libspdk_event_sock.so 00:02:51.840 SYMLINK libspdk_event_scheduler.so 00:02:52.100 SYMLINK libspdk_event_vmd.so 00:02:52.100 SYMLINK libspdk_event_iobuf.so 00:02:52.360 CC module/event/subsystems/accel/accel.o 00:02:52.360 LIB libspdk_event_accel.a 00:02:52.360 SO libspdk_event_accel.so.6.0 00:02:52.619 SYMLINK libspdk_event_accel.so 00:02:52.878 CC module/event/subsystems/bdev/bdev.o 00:02:53.135 LIB libspdk_event_bdev.a 00:02:53.135 SO libspdk_event_bdev.so.6.0 00:02:53.135 SYMLINK libspdk_event_bdev.so 00:02:53.393 CC module/event/subsystems/ublk/ublk.o 00:02:53.393 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.393 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.393 CC module/event/subsystems/scsi/scsi.o 00:02:53.393 CC module/event/subsystems/nbd/nbd.o 00:02:53.650 LIB libspdk_event_ublk.a 00:02:53.650 LIB libspdk_event_nbd.a 00:02:53.650 LIB libspdk_event_scsi.a 00:02:53.650 SO libspdk_event_ublk.so.3.0 00:02:53.650 SO libspdk_event_nbd.so.6.0 00:02:53.650 SO libspdk_event_scsi.so.6.0 00:02:53.650 SYMLINK libspdk_event_ublk.so 00:02:53.650 SYMLINK libspdk_event_nbd.so 00:02:53.650 LIB libspdk_event_nvmf.a 00:02:53.650 SYMLINK libspdk_event_scsi.so 00:02:53.650 SO libspdk_event_nvmf.so.6.0 00:02:53.650 SYMLINK libspdk_event_nvmf.so 00:02:53.908 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.908 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.165 LIB libspdk_event_vhost_scsi.a 00:02:54.165 LIB libspdk_event_iscsi.a 00:02:54.165 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.165 SO libspdk_event_iscsi.so.6.0 00:02:54.165 SYMLINK libspdk_event_iscsi.so 00:02:54.165 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.422 SO libspdk.so.6.0 00:02:54.422 SYMLINK libspdk.so 00:02:54.422 CXX app/trace/trace.o 00:02:54.422 CC app/trace_record/trace_record.o 00:02:54.678 CC app/spdk_nvme_perf/perf.o 00:02:54.678 CC app/spdk_nvme_identify/identify.o 00:02:54.678 CC app/spdk_lspci/spdk_lspci.o 00:02:54.678 CC app/nvmf_tgt/nvmf_main.o 00:02:54.678 CC 
app/iscsi_tgt/iscsi_tgt.o 00:02:54.678 CC app/spdk_tgt/spdk_tgt.o 00:02:54.678 CC examples/util/zipf/zipf.o 00:02:54.678 CC test/thread/poller_perf/poller_perf.o 00:02:54.678 LINK spdk_lspci 00:02:54.678 LINK nvmf_tgt 00:02:54.935 LINK zipf 00:02:54.935 LINK poller_perf 00:02:54.935 LINK spdk_trace_record 00:02:54.935 LINK iscsi_tgt 00:02:54.935 LINK spdk_tgt 00:02:54.935 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.935 LINK spdk_trace 00:02:55.192 CC app/spdk_top/spdk_top.o 00:02:55.192 CC examples/ioat/perf/perf.o 00:02:55.192 CC examples/ioat/verify/verify.o 00:02:55.192 CC app/spdk_dd/spdk_dd.o 00:02:55.192 LINK spdk_nvme_discover 00:02:55.192 CC test/dma/test_dma/test_dma.o 00:02:55.192 CC test/app/bdev_svc/bdev_svc.o 00:02:55.450 LINK ioat_perf 00:02:55.450 LINK spdk_nvme_identify 00:02:55.450 CC app/fio/nvme/fio_plugin.o 00:02:55.450 LINK verify 00:02:55.450 LINK spdk_nvme_perf 00:02:55.450 LINK bdev_svc 00:02:55.450 CC app/vhost/vhost.o 00:02:55.708 LINK test_dma 00:02:55.708 LINK spdk_dd 00:02:55.708 TEST_HEADER include/spdk/accel.h 00:02:55.708 TEST_HEADER include/spdk/accel_module.h 00:02:55.708 TEST_HEADER include/spdk/assert.h 00:02:55.708 TEST_HEADER include/spdk/barrier.h 00:02:55.708 TEST_HEADER include/spdk/base64.h 00:02:55.708 TEST_HEADER include/spdk/bdev.h 00:02:55.708 TEST_HEADER include/spdk/bdev_module.h 00:02:55.708 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.708 TEST_HEADER include/spdk/bit_array.h 00:02:55.708 TEST_HEADER include/spdk/bit_pool.h 00:02:55.708 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.708 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.708 TEST_HEADER include/spdk/blobfs.h 00:02:55.708 TEST_HEADER include/spdk/blob.h 00:02:55.708 TEST_HEADER include/spdk/conf.h 00:02:55.708 TEST_HEADER include/spdk/config.h 00:02:55.708 TEST_HEADER include/spdk/cpuset.h 00:02:55.708 TEST_HEADER include/spdk/crc16.h 00:02:55.708 TEST_HEADER include/spdk/crc32.h 00:02:55.708 TEST_HEADER include/spdk/crc64.h 00:02:55.708 TEST_HEADER include/spdk/dif.h 00:02:55.708 TEST_HEADER include/spdk/dma.h 00:02:55.708 TEST_HEADER include/spdk/endian.h 00:02:55.708 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.708 TEST_HEADER include/spdk/env.h 00:02:55.708 TEST_HEADER include/spdk/event.h 00:02:55.708 TEST_HEADER include/spdk/fd_group.h 00:02:55.708 TEST_HEADER include/spdk/fd.h 00:02:55.708 TEST_HEADER include/spdk/file.h 00:02:55.708 TEST_HEADER include/spdk/ftl.h 00:02:55.708 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.708 LINK vhost 00:02:55.708 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.708 TEST_HEADER include/spdk/hexlify.h 00:02:55.708 TEST_HEADER include/spdk/histogram_data.h 00:02:55.708 TEST_HEADER include/spdk/idxd.h 00:02:55.708 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.708 TEST_HEADER include/spdk/init.h 00:02:55.708 TEST_HEADER include/spdk/ioat.h 00:02:55.708 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.708 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.708 TEST_HEADER include/spdk/json.h 00:02:55.708 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.708 TEST_HEADER include/spdk/keyring.h 00:02:55.708 TEST_HEADER include/spdk/keyring_module.h 00:02:55.708 TEST_HEADER include/spdk/likely.h 00:02:55.708 TEST_HEADER include/spdk/log.h 00:02:55.708 TEST_HEADER include/spdk/lvol.h 00:02:55.708 TEST_HEADER include/spdk/memory.h 00:02:55.708 TEST_HEADER include/spdk/mmio.h 00:02:55.708 TEST_HEADER include/spdk/nbd.h 00:02:55.708 TEST_HEADER include/spdk/net.h 00:02:55.708 TEST_HEADER include/spdk/notify.h 00:02:55.708 TEST_HEADER include/spdk/nvme.h 00:02:55.708 
TEST_HEADER include/spdk/nvme_intel.h 00:02:55.708 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.708 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.708 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.708 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.709 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.709 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.709 TEST_HEADER include/spdk/nvmf.h 00:02:55.709 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.709 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.709 TEST_HEADER include/spdk/opal.h 00:02:55.709 TEST_HEADER include/spdk/opal_spec.h 00:02:55.709 CC test/event/event_perf/event_perf.o 00:02:55.709 TEST_HEADER include/spdk/pci_ids.h 00:02:55.709 TEST_HEADER include/spdk/pipe.h 00:02:55.966 TEST_HEADER include/spdk/queue.h 00:02:55.966 TEST_HEADER include/spdk/reduce.h 00:02:55.966 TEST_HEADER include/spdk/rpc.h 00:02:55.966 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.966 TEST_HEADER include/spdk/scheduler.h 00:02:55.966 TEST_HEADER include/spdk/scsi.h 00:02:55.966 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.966 TEST_HEADER include/spdk/sock.h 00:02:55.966 TEST_HEADER include/spdk/stdinc.h 00:02:55.966 TEST_HEADER include/spdk/string.h 00:02:55.966 TEST_HEADER include/spdk/thread.h 00:02:55.966 TEST_HEADER include/spdk/trace.h 00:02:55.966 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.966 TEST_HEADER include/spdk/trace_parser.h 00:02:55.966 TEST_HEADER include/spdk/tree.h 00:02:55.966 TEST_HEADER include/spdk/ublk.h 00:02:55.966 TEST_HEADER include/spdk/util.h 00:02:55.966 TEST_HEADER include/spdk/uuid.h 00:02:55.966 TEST_HEADER include/spdk/version.h 00:02:55.966 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.966 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.966 TEST_HEADER include/spdk/vhost.h 00:02:55.966 TEST_HEADER include/spdk/vmd.h 00:02:55.966 TEST_HEADER include/spdk/xor.h 00:02:55.966 TEST_HEADER include/spdk/zipf.h 00:02:55.966 CXX test/cpp_headers/accel.o 00:02:55.966 LINK lsvmd 00:02:55.966 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.966 CC test/rpc_client/rpc_client_test.o 00:02:55.966 LINK spdk_nvme 00:02:55.966 LINK spdk_top 00:02:55.966 LINK event_perf 00:02:55.966 CC examples/vmd/led/led.o 00:02:55.966 CXX test/cpp_headers/accel_module.o 00:02:55.966 CXX test/cpp_headers/assert.o 00:02:56.223 LINK rpc_client_test 00:02:56.223 CXX test/cpp_headers/barrier.o 00:02:56.224 CC app/fio/bdev/fio_plugin.o 00:02:56.224 LINK led 00:02:56.224 CC test/event/reactor/reactor.o 00:02:56.224 LINK nvme_fuzz 00:02:56.224 CXX test/cpp_headers/base64.o 00:02:56.224 CC test/env/vtophys/vtophys.o 00:02:56.481 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.481 LINK reactor 00:02:56.481 CC test/env/memory/memory_ut.o 00:02:56.481 LINK mem_callbacks 00:02:56.481 CXX test/cpp_headers/bdev.o 00:02:56.481 LINK vtophys 00:02:56.481 LINK env_dpdk_post_init 00:02:56.481 CC test/env/pci/pci_ut.o 00:02:56.481 CC examples/idxd/perf/perf.o 00:02:56.481 CXX test/cpp_headers/bdev_module.o 00:02:56.739 CC test/event/reactor_perf/reactor_perf.o 00:02:56.739 CXX test/cpp_headers/bdev_zone.o 00:02:56.739 LINK spdk_bdev 00:02:56.739 LINK reactor_perf 00:02:56.739 CXX test/cpp_headers/bit_array.o 00:02:56.997 LINK idxd_perf 00:02:56.997 CC test/accel/dif/dif.o 00:02:56.997 LINK pci_ut 00:02:56.997 CC test/blobfs/mkfs/mkfs.o 00:02:56.997 CC test/nvme/aer/aer.o 00:02:56.997 CXX test/cpp_headers/bit_pool.o 00:02:56.997 CC test/event/app_repeat/app_repeat.o 00:02:56.997 CC test/lvol/esnap/esnap.o 00:02:57.255 LINK mkfs 00:02:57.255 
CXX test/cpp_headers/blob_bdev.o 00:02:57.255 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:57.255 LINK app_repeat 00:02:57.255 CC test/nvme/reset/reset.o 00:02:57.255 LINK aer 00:02:57.255 LINK dif 00:02:57.513 CXX test/cpp_headers/blobfs_bdev.o 00:02:57.513 LINK interrupt_tgt 00:02:57.513 CXX test/cpp_headers/blobfs.o 00:02:57.513 CC test/nvme/sgl/sgl.o 00:02:57.513 LINK iscsi_fuzz 00:02:57.513 LINK memory_ut 00:02:57.513 CC test/event/scheduler/scheduler.o 00:02:57.513 CXX test/cpp_headers/blob.o 00:02:57.513 LINK reset 00:02:57.771 CXX test/cpp_headers/conf.o 00:02:57.771 CC test/app/histogram_perf/histogram_perf.o 00:02:57.771 CXX test/cpp_headers/config.o 00:02:57.771 LINK sgl 00:02:57.771 LINK scheduler 00:02:57.771 CC test/app/jsoncat/jsoncat.o 00:02:57.771 CC examples/thread/thread/thread_ex.o 00:02:57.771 CC test/bdev/bdevio/bdevio.o 00:02:57.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.771 CC test/app/stub/stub.o 00:02:57.771 CXX test/cpp_headers/cpuset.o 00:02:58.029 LINK histogram_perf 00:02:58.029 LINK jsoncat 00:02:58.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:58.029 CC test/nvme/e2edp/nvme_dp.o 00:02:58.029 CXX test/cpp_headers/crc16.o 00:02:58.029 LINK stub 00:02:58.029 LINK thread 00:02:58.029 CC test/nvme/overhead/overhead.o 00:02:58.029 CC test/nvme/err_injection/err_injection.o 00:02:58.287 CC test/nvme/startup/startup.o 00:02:58.287 LINK bdevio 00:02:58.287 CXX test/cpp_headers/crc32.o 00:02:58.287 CXX test/cpp_headers/crc64.o 00:02:58.287 LINK nvme_dp 00:02:58.287 LINK err_injection 00:02:58.287 LINK vhost_fuzz 00:02:58.287 LINK overhead 00:02:58.287 LINK startup 00:02:58.287 CXX test/cpp_headers/dif.o 00:02:58.287 CXX test/cpp_headers/dma.o 00:02:58.546 CXX test/cpp_headers/endian.o 00:02:58.546 CC examples/sock/hello_world/hello_sock.o 00:02:58.546 CXX test/cpp_headers/env_dpdk.o 00:02:58.546 CC test/nvme/reserve/reserve.o 00:02:58.546 CXX test/cpp_headers/env.o 00:02:58.546 CC test/nvme/simple_copy/simple_copy.o 00:02:58.546 CC test/nvme/connect_stress/connect_stress.o 00:02:58.803 CC test/nvme/boot_partition/boot_partition.o 00:02:58.803 CC examples/accel/perf/accel_perf.o 00:02:58.803 LINK hello_sock 00:02:58.803 LINK reserve 00:02:58.803 CC examples/blob/hello_world/hello_blob.o 00:02:58.803 CXX test/cpp_headers/event.o 00:02:58.803 CC examples/blob/cli/blobcli.o 00:02:58.803 LINK boot_partition 00:02:58.803 LINK connect_stress 00:02:58.803 CXX test/cpp_headers/fd_group.o 00:02:58.803 LINK simple_copy 00:02:59.061 CXX test/cpp_headers/fd.o 00:02:59.061 LINK hello_blob 00:02:59.061 CXX test/cpp_headers/file.o 00:02:59.061 CC test/nvme/compliance/nvme_compliance.o 00:02:59.061 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.061 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.061 CC examples/nvme/hello_world/hello_world.o 00:02:59.319 LINK accel_perf 00:02:59.319 CC examples/nvme/reconnect/reconnect.o 00:02:59.319 CXX test/cpp_headers/ftl.o 00:02:59.320 LINK blobcli 00:02:59.320 CC test/nvme/fdp/fdp.o 00:02:59.320 LINK fused_ordering 00:02:59.320 LINK doorbell_aers 00:02:59.320 LINK hello_world 00:02:59.320 CXX test/cpp_headers/gpt_spec.o 00:02:59.320 LINK nvme_compliance 00:02:59.578 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.578 LINK reconnect 00:02:59.578 CXX test/cpp_headers/hexlify.o 00:02:59.578 CC test/nvme/cuse/cuse.o 00:02:59.578 CC examples/nvme/arbitration/arbitration.o 00:02:59.578 LINK fdp 00:02:59.578 CC examples/nvme/hotplug/hotplug.o 00:02:59.836 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.836 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:02:59.836 CXX test/cpp_headers/histogram_data.o 00:02:59.836 CC examples/nvme/abort/abort.o 00:02:59.836 CXX test/cpp_headers/idxd.o 00:02:59.836 LINK hotplug 00:02:59.836 LINK cmb_copy 00:02:59.836 LINK hello_bdev 00:03:00.096 LINK nvme_manage 00:03:00.096 LINK arbitration 00:03:00.096 CXX test/cpp_headers/idxd_spec.o 00:03:00.096 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.096 CXX test/cpp_headers/init.o 00:03:00.096 CXX test/cpp_headers/ioat.o 00:03:00.096 CXX test/cpp_headers/ioat_spec.o 00:03:00.096 CXX test/cpp_headers/iscsi_spec.o 00:03:00.096 LINK abort 00:03:00.096 CXX test/cpp_headers/json.o 00:03:00.354 LINK pmr_persistence 00:03:00.355 CXX test/cpp_headers/jsonrpc.o 00:03:00.355 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.355 CXX test/cpp_headers/keyring.o 00:03:00.355 CXX test/cpp_headers/keyring_module.o 00:03:00.355 CXX test/cpp_headers/likely.o 00:03:00.355 CXX test/cpp_headers/log.o 00:03:00.355 CXX test/cpp_headers/lvol.o 00:03:00.355 CXX test/cpp_headers/memory.o 00:03:00.355 CXX test/cpp_headers/mmio.o 00:03:00.619 CXX test/cpp_headers/nbd.o 00:03:00.619 CXX test/cpp_headers/net.o 00:03:00.619 CXX test/cpp_headers/notify.o 00:03:00.619 CXX test/cpp_headers/nvme.o 00:03:00.619 CXX test/cpp_headers/nvme_intel.o 00:03:00.619 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.619 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.619 CXX test/cpp_headers/nvme_spec.o 00:03:00.619 CXX test/cpp_headers/nvme_zns.o 00:03:00.619 CXX test/cpp_headers/nvmf_cmd.o 00:03:00.619 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.619 CXX test/cpp_headers/nvmf.o 00:03:00.619 CXX test/cpp_headers/nvmf_spec.o 00:03:00.876 CXX test/cpp_headers/nvmf_transport.o 00:03:00.877 CXX test/cpp_headers/opal.o 00:03:00.877 CXX test/cpp_headers/opal_spec.o 00:03:00.877 CXX test/cpp_headers/pci_ids.o 00:03:00.877 CXX test/cpp_headers/pipe.o 00:03:00.877 CXX test/cpp_headers/queue.o 00:03:00.877 CXX test/cpp_headers/reduce.o 00:03:00.877 CXX test/cpp_headers/rpc.o 00:03:00.877 LINK cuse 00:03:00.877 CXX test/cpp_headers/scheduler.o 00:03:00.877 CXX test/cpp_headers/scsi.o 00:03:01.133 LINK bdevperf 00:03:01.133 CXX test/cpp_headers/scsi_spec.o 00:03:01.133 CXX test/cpp_headers/sock.o 00:03:01.133 CXX test/cpp_headers/stdinc.o 00:03:01.133 CXX test/cpp_headers/string.o 00:03:01.133 CXX test/cpp_headers/thread.o 00:03:01.133 CXX test/cpp_headers/trace.o 00:03:01.133 CXX test/cpp_headers/trace_parser.o 00:03:01.133 CXX test/cpp_headers/tree.o 00:03:01.133 CXX test/cpp_headers/ublk.o 00:03:01.133 CXX test/cpp_headers/util.o 00:03:01.133 CXX test/cpp_headers/uuid.o 00:03:01.392 CXX test/cpp_headers/version.o 00:03:01.392 CXX test/cpp_headers/vfio_user_pci.o 00:03:01.392 CXX test/cpp_headers/vfio_user_spec.o 00:03:01.392 CXX test/cpp_headers/vhost.o 00:03:01.392 CXX test/cpp_headers/vmd.o 00:03:01.392 CXX test/cpp_headers/xor.o 00:03:01.392 CXX test/cpp_headers/zipf.o 00:03:01.392 CC examples/nvmf/nvmf/nvmf.o 00:03:01.650 LINK nvmf 00:03:02.217 LINK esnap 00:03:02.784 00:03:02.784 real 1m2.147s 00:03:02.784 user 6m24.268s 00:03:02.784 sys 1m33.907s 00:03:02.784 21:24:47 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:02.784 21:24:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.784 ************************************ 00:03:02.784 END TEST make 00:03:02.784 ************************************ 00:03:02.784 21:24:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.784 21:24:47 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:03:02.784 21:24:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.784 21:24:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.784 21:24:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.784 21:24:47 -- pm/common@44 -- $ pid=5138 00:03:02.784 21:24:47 -- pm/common@50 -- $ kill -TERM 5138 00:03:02.784 21:24:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.784 21:24:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.784 21:24:47 -- pm/common@44 -- $ pid=5140 00:03:02.784 21:24:47 -- pm/common@50 -- $ kill -TERM 5140 00:03:02.785 21:24:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:02.785 21:24:47 -- nvmf/common.sh@7 -- # uname -s 00:03:02.785 21:24:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.785 21:24:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.785 21:24:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.785 21:24:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.785 21:24:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.785 21:24:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.785 21:24:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.785 21:24:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.785 21:24:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.785 21:24:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.785 21:24:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:03:02.785 21:24:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:03:02.785 21:24:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.785 21:24:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.785 21:24:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:02.785 21:24:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.785 21:24:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:02.785 21:24:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.785 21:24:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.785 21:24:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.785 21:24:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.785 21:24:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.785 21:24:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.785 21:24:47 -- paths/export.sh@5 -- # export PATH 00:03:02.785 21:24:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.785 21:24:47 -- nvmf/common.sh@47 -- # : 0 00:03:02.785 21:24:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:02.785 21:24:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:02.785 21:24:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.785 21:24:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.785 21:24:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.785 21:24:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:02.785 21:24:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:02.785 21:24:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:02.785 21:24:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.785 21:24:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.785 21:24:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.785 21:24:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.785 21:24:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:02.785 21:24:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.785 21:24:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:02.785 21:24:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.785 21:24:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.785 21:24:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.785 21:24:47 -- spdk/autotest.sh@48 -- # udevadm_pid=52745 00:03:02.785 21:24:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.785 21:24:47 -- pm/common@17 -- # local monitor 00:03:02.785 21:24:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.785 21:24:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.785 21:24:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.785 21:24:47 -- pm/common@25 -- # sleep 1 00:03:02.785 21:24:47 -- pm/common@21 -- # date +%s 00:03:02.785 21:24:47 -- pm/common@21 -- # date +%s 00:03:02.785 21:24:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721856287 00:03:02.785 21:24:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721856287 00:03:02.785 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721856287_collect-vmstat.pm.log 00:03:02.785 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721856287_collect-cpu-load.pm.log 00:03:04.161 21:24:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:04.161 21:24:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:04.161 21:24:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.161 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:03:04.161 21:24:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:04.161 21:24:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:04.161 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:03:04.161 21:24:48 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:04.161 21:24:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:04.161 21:24:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:04.161 21:24:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:04.162 21:24:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:04.162 21:24:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:04.162 21:24:48 -- common/autotest_common.sh@1455 -- # uname 00:03:04.162 21:24:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:04.162 21:24:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:04.162 21:24:48 -- common/autotest_common.sh@1475 -- # uname 00:03:04.162 21:24:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:04.162 21:24:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:04.162 21:24:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:04.162 21:24:48 -- spdk/autotest.sh@72 -- # hash lcov 00:03:04.162 21:24:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:04.162 21:24:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:04.162 --rc lcov_branch_coverage=1 00:03:04.162 --rc lcov_function_coverage=1 00:03:04.162 --rc genhtml_branch_coverage=1 00:03:04.162 --rc genhtml_function_coverage=1 00:03:04.162 --rc genhtml_legend=1 00:03:04.162 --rc geninfo_all_blocks=1 00:03:04.162 ' 00:03:04.162 21:24:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:04.162 --rc lcov_branch_coverage=1 00:03:04.162 --rc lcov_function_coverage=1 00:03:04.162 --rc genhtml_branch_coverage=1 00:03:04.162 --rc genhtml_function_coverage=1 00:03:04.162 --rc genhtml_legend=1 00:03:04.162 --rc geninfo_all_blocks=1 00:03:04.162 ' 00:03:04.162 21:24:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:04.162 --rc lcov_branch_coverage=1 00:03:04.162 --rc lcov_function_coverage=1 00:03:04.162 --rc genhtml_branch_coverage=1 00:03:04.162 --rc genhtml_function_coverage=1 00:03:04.162 --rc genhtml_legend=1 00:03:04.162 --rc geninfo_all_blocks=1 00:03:04.162 --no-external' 00:03:04.162 21:24:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:04.162 --rc lcov_branch_coverage=1 00:03:04.162 --rc lcov_function_coverage=1 00:03:04.162 --rc genhtml_branch_coverage=1 00:03:04.162 --rc genhtml_function_coverage=1 00:03:04.162 --rc genhtml_legend=1 00:03:04.162 --rc geninfo_all_blocks=1 00:03:04.162 --no-external' 00:03:04.162 21:24:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:04.162 lcov: LCOV version 1.14 00:03:04.162 21:24:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:19.044 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
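The LCOV_OPTS/LCOV exports and the spdk/autotest.sh@85 command traced above capture a coverage baseline of the freshly built tree before any test runs; the geninfo "no functions found" warnings interleaved here are expected for objects that define no functions of their own, such as the test/cpp_headers header-compile stubs. A minimal standalone sketch of that baseline capture, assuming lcov 1.14 and the build/output paths used in this run:

    # Capture a zero-hit coverage baseline so post-test data can be combined with it later.
    # LCOV_OPTS mirrors the flags exported by autotest.sh in the trace above.
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    # $LCOV_OPTS is left unquoted on purpose so it expands into separate flags.
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline \
         -d /home/vagrant/spdk_repo/spdk \
         -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info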
00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:31.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:31.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:31.267 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:31.267 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:31.267 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:31.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:31.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:31.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:31.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:31.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:31.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:31.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:31.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:31.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:31.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:31.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:31.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:31.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:31.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:34.849 21:25:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:34.849 21:25:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.849 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:03:34.849 21:25:19 -- spdk/autotest.sh@91 -- # rm -f 00:03:34.849 21:25:19 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:35.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.108 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:35.108 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:35.108 21:25:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:35.108 21:25:19 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:35.108 21:25:19 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:35.108 21:25:19 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:35.108 21:25:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.108 21:25:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:35.108 21:25:19 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:35.108 21:25:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.108 21:25:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.108 21:25:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.108 21:25:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:35.108 21:25:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:35.108 21:25:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:35.108 21:25:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.108 21:25:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.108 21:25:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:35.108 21:25:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:35.108 21:25:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:35.108 21:25:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.108 21:25:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.108 21:25:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
00:03:35.109 21:25:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:35.109 21:25:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:35.109 21:25:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.109 21:25:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:35.109 21:25:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.109 21:25:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.109 21:25:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:35.109 21:25:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:35.109 21:25:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.109 No valid GPT data, bailing 00:03:35.109 21:25:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.109 21:25:20 -- scripts/common.sh@391 -- # pt= 00:03:35.109 21:25:20 -- scripts/common.sh@392 -- # return 1 00:03:35.109 21:25:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.109 1+0 records in 00:03:35.109 1+0 records out 00:03:35.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046147 s, 227 MB/s 00:03:35.109 21:25:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.109 21:25:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.109 21:25:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:35.109 21:25:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:35.109 21:25:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:35.109 No valid GPT data, bailing 00:03:35.109 21:25:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:35.109 21:25:20 -- scripts/common.sh@391 -- # pt= 00:03:35.109 21:25:20 -- scripts/common.sh@392 -- # return 1 00:03:35.109 21:25:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:35.367 1+0 records in 00:03:35.367 1+0 records out 00:03:35.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466483 s, 225 MB/s 00:03:35.367 21:25:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.367 21:25:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.367 21:25:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:35.367 21:25:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:35.367 21:25:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:35.367 No valid GPT data, bailing 00:03:35.367 21:25:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:35.367 21:25:20 -- scripts/common.sh@391 -- # pt= 00:03:35.367 21:25:20 -- scripts/common.sh@392 -- # return 1 00:03:35.367 21:25:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:35.367 1+0 records in 00:03:35.367 1+0 records out 00:03:35.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377086 s, 278 MB/s 00:03:35.367 21:25:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.367 21:25:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.367 21:25:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:35.367 21:25:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:35.367 21:25:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:35.367 No valid GPT data, bailing 00:03:35.367 21:25:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:35.367 
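The get_zoned_devs and block_in_use checks traced around here implement autotest's pre_cleanup wipe: every whole-disk nvme namespace that is not zoned and carries no partition table gets its first 1 MiB zeroed before the tests start. A condensed sketch of that pass, keyed off blkid alone (the real run also consults scripts/spdk-gpt.py, which this sketch omits):

    # WARNING: destructive; this mirrors the dd wipe in the trace and zeroes device headers.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do            # whole namespaces only, no partitions
        name=$(basename "$dev")
        if [[ -e /sys/block/$name/queue/zoned && $(</sys/block/$name/queue/zoned) != none ]]; then
            continue                            # zoned namespace: skipped, as get_zoned_devs does
        fi
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # same 1 MiB wipe as in the trace
        fi
    done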
21:25:20 -- scripts/common.sh@391 -- # pt= 00:03:35.367 21:25:20 -- scripts/common.sh@392 -- # return 1 00:03:35.367 21:25:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:35.367 1+0 records in 00:03:35.367 1+0 records out 00:03:35.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412651 s, 254 MB/s 00:03:35.367 21:25:20 -- spdk/autotest.sh@118 -- # sync 00:03:35.367 21:25:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.367 21:25:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.367 21:25:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:37.270 21:25:22 -- spdk/autotest.sh@124 -- # uname -s 00:03:37.270 21:25:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:37.270 21:25:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:37.270 21:25:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.270 21:25:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.270 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:03:37.270 ************************************ 00:03:37.270 START TEST setup.sh 00:03:37.270 ************************************ 00:03:37.270 21:25:22 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:37.270 * Looking for test storage... 00:03:37.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.270 21:25:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:37.270 21:25:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:37.270 21:25:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:37.270 21:25:22 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.270 21:25:22 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.270 21:25:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.270 ************************************ 00:03:37.270 START TEST acl 00:03:37.270 ************************************ 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:37.270 * Looking for test storage... 
00:03:37.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.270 21:25:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:37.270 21:25:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:37.271 21:25:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.271 21:25:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:37.271 21:25:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:37.271 21:25:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:37.271 21:25:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:37.271 21:25:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:37.271 21:25:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.271 21:25:22 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:38.207 21:25:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:38.207 21:25:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:38.207 21:25:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.207 21:25:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:38.207 21:25:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.207 21:25:22 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:38.775 21:25:23 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.775 Hugepages 00:03:38.775 node hugesize free / total 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.775 00:03:38.775 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:38.775 21:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:39.034 21:25:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:39.034 21:25:23 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.034 21:25:23 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.034 21:25:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.034 ************************************ 00:03:39.034 START TEST denied 00:03:39.034 ************************************ 00:03:39.034 21:25:23 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:39.034 21:25:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:39.034 21:25:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:39.034 21:25:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:39.034 21:25:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.034 21:25:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.970 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.970 21:25:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.537 00:03:40.537 real 0m1.423s 00:03:40.537 user 0m0.573s 00:03:40.537 sys 0m0.785s 00:03:40.537 21:25:25 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.537 ************************************ 00:03:40.537 END TEST denied 00:03:40.537 21:25:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:40.537 ************************************ 00:03:40.537 21:25:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:40.537 21:25:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.537 21:25:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.537 21:25:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.537 ************************************ 00:03:40.537 START TEST allowed 00:03:40.537 ************************************ 00:03:40.537 21:25:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:40.537 21:25:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:40.537 21:25:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:40.537 21:25:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.537 21:25:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.537 21:25:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:41.473 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.473 21:25:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.040 00:03:42.040 real 0m1.520s 00:03:42.040 user 0m0.650s 00:03:42.040 sys 0m0.859s 00:03:42.040 21:25:26 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.040 21:25:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:03:42.040 ************************************ 00:03:42.040 END TEST allowed 00:03:42.040 ************************************ 00:03:42.040 00:03:42.040 real 0m4.752s 00:03:42.040 user 0m2.068s 00:03:42.040 sys 0m2.609s 00:03:42.040 21:25:26 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.040 21:25:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.040 ************************************ 00:03:42.040 END TEST acl 00:03:42.040 ************************************ 00:03:42.040 21:25:26 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:42.040 21:25:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.040 21:25:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.040 21:25:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.040 ************************************ 00:03:42.040 START TEST hugepages 00:03:42.040 ************************************ 00:03:42.040 21:25:26 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:42.300 * Looking for test storage... 00:03:42.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.300 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6001828 kB' 'MemAvailable: 7383084 kB' 'Buffers: 2436 kB' 'Cached: 1595484 kB' 'SwapCached: 0 kB' 'Active: 435596 kB' 'Inactive: 1266572 kB' 'Active(anon): 114736 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266572 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 105936 kB' 'Mapped: 48984 kB' 'Shmem: 10488 kB' 'KReclaimable: 61524 kB' 'Slab: 139008 kB' 'SReclaimable: 61524 kB' 'SUnreclaim: 77484 kB' 'KernelStack: 6348 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 341504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.301 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 
21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 
21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
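The xtrace above is setup/common.sh scanning /proc/meminfo field by field until it reaches Hugepagesize (2048 kB on this VM), after which setup/hugepages.sh derives the sysfs paths and zeroes every per-node hugepage pool before the test. A minimal sketch of that pattern, assuming a single NUMA node; get_meminfo_field and the surrounding names are illustrative, not the repo's exact helpers:

get_meminfo_field() {                       # e.g. get_meminfo_field Hugepagesize -> 2048
  local key=$1 var val _
  while IFS=': ' read -r var val _; do      # splits "Hugepagesize:    2048 kB" into key/value/unit
    [[ $var == "$key" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  return 1
}

default_hugepages=$(get_meminfo_field Hugepagesize)   # 2048 (kB) in this run
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
global_huge_nr=/proc/sys/vm/nr_hugepages

# clear_hp equivalent: reset every per-node hugepage pool to 0 before the test
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
  echo 0 > "$hp"                            # needs root, as in the CI VM
done
export CLEAR_HUGE=yes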
00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.302 21:25:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:42.302 21:25:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.302 21:25:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.302 21:25:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.302 ************************************ 00:03:42.302 START TEST default_setup 00:03:42.302 ************************************ 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.302 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.303 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.134 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:43.134 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.134 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100484 kB' 'MemAvailable: 9481552 kB' 'Buffers: 2436 kB' 'Cached: 1595476 kB' 'SwapCached: 0 kB' 'Active: 451984 kB' 'Inactive: 1266576 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122324 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138596 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77456 kB' 'KernelStack: 6320 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 360504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
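Before the verification scans below, note what the default_setup test traced above actually asked for: get_test_nr_hugepages 2097152 0 requests 2097152 kB of hugepage memory on node 0, and with the 2048 kB page size detected earlier that works out to 1024 pages. A worked sketch of the arithmetic; variable names mirror the trace, but the exact script logic may differ:

size=2097152                                 # requested hugepage memory, in kB
default_hugepages=2048                       # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size / default_hugepages )) # 2097152 / 2048 = 1024 pages
nodes_test=()
nodes_test[0]=$nr_hugepages                  # all 1024 pages assigned to node 0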
00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 
21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
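The meminfo scans in this part of the trace (AnonHugePages here, HugePages_Surp and HugePages_Rsvd just after) feed verify_nr_hugepages, which checks the pool that default_setup just configured; the snapshot above already shows HugePages_Total: 1024, HugePages_Free: 1024 and Hugepagesize: 2048 kB. A condensed sketch of that bookkeeping, reusing the get_meminfo_field helper sketched earlier; the exact pass/fail expressions in setup/hugepages.sh may differ:

expected=1024                              # nodes_test[0] from the default_setup request above
anon=$(get_meminfo_field AnonHugePages)    # 0 kB: THP mode is [madvise] per the trace, none in use
surp=$(get_meminfo_field HugePages_Surp)   # 0 surplus pages
resv=$(get_meminfo_field HugePages_Rsvd)   # 0 reserved pages
total=$(get_meminfo_field HugePages_Total) # 1024
free=$(get_meminfo_field HugePages_Free)   # 1024
(( total == expected )) || echo "expected $expected hugepages, found $total"
echo "hugepage pool: total=$total free=$free surp=$surp resv=$resv anon=${anon}kB"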
00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.135 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 8100932 kB' 'MemAvailable: 9482004 kB' 'Buffers: 2436 kB' 'Cached: 1595476 kB' 'SwapCached: 0 kB' 'Active: 451796 kB' 'Inactive: 1266580 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138568 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77428 kB' 'KernelStack: 6192 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.136 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.137 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100932 kB' 'MemAvailable: 9482008 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451896 kB' 'Inactive: 1266584 kB' 'Active(anon): 131036 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138536 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77396 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.138 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 
21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.139 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 
21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.140 
21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.140 nr_hugepages=1024 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.140 resv_hugepages=0 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.140 surplus_hugepages=0 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.140 anon_hugepages=0 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100932 kB' 'MemAvailable: 9482008 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451920 kB' 'Inactive: 1266584 kB' 'Active(anon): 131060 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138536 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77396 kB' 'KernelStack: 6256 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 
21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.140 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.141 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:43.142 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8100932 kB' 'MemUsed: 4141040 kB' 'SwapCached: 0 kB' 'Active: 451632 kB' 'Inactive: 1266584 kB' 'Active(anon): 130772 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1597916 kB' 'Mapped: 48728 kB' 'AnonPages: 121916 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61140 kB' 'Slab: 138536 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.402 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 
21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.403 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.404 node0=1024 expecting 1024 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.404 00:03:43.404 real 0m1.050s 00:03:43.404 user 0m0.502s 00:03:43.404 sys 0m0.469s 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.404 21:25:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:43.404 ************************************ 00:03:43.404 END TEST default_setup 00:03:43.404 ************************************ 00:03:43.404 21:25:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:43.404 21:25:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.404 21:25:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.404 21:25:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.404 ************************************ 00:03:43.404 START TEST per_node_1G_alloc 00:03:43.404 ************************************ 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.404 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.665 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.665 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.665 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:43.665 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:43.665 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.665 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.665 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.665 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152844 kB' 'MemAvailable: 10533924 kB' 'Buffers: 2436 kB' 'Cached: 1595476 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1266588 kB' 'Active(anon): 131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138580 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77440 kB' 'KernelStack: 6288 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
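The per_node_1G_alloc setup traced above asks for 1048576 kB on node 0; with the default 2048 kB hugepage size that works out to 512 pages, which is why the trace settles on nr_hugepages=512, NRHUGE=512 and HUGENODE=0 before invoking scripts/setup.sh. A minimal bash sketch of that arithmetic and of the per-node sysfs knob such a request ultimately drives (the node0 path and 2048kB size are assumptions for illustration; the exact steps scripts/setup.sh takes may differ):

    size_kb=1048576                               # requested per-node size in kB
    hugepage_kb=2048                              # default Hugepagesize reported in /proc/meminfo
    nr_hugepages=$(( size_kb / hugepage_kb ))     # -> 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0"
    # per-node sysfs interface the request boils down to (illustrative, needs root):
    echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages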
00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
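Before reading AnonHugePages, the hugepages.sh@96 test traced earlier checks the transparent-hugepage mode string ("always [madvise] never") and only proceeds when the selected mode is not [never]. A condensed sketch of that check, assuming the value comes from the standard /sys/kernel/mm/transparent_hugepage/enabled location:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages in /proc/meminfo is worth reporting
        grep AnonHugePages /proc/meminfo
    fi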
00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9153100 kB' 'MemAvailable: 10534184 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451900 kB' 'Inactive: 1266592 kB' 'Active(anon): 131040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122184 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138568 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77428 kB' 'KernelStack: 6256 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.667 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.667 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
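Each of the long field-by-field loops in this trace (one "[[ <field> == HugePages_Surp ]]" test plus a continue per meminfo line) is the same lookup: pick a meminfo source, split every line into a field name and a value, and echo the value once the requested field is found. A condensed reimplementation of that get_meminfo lookup, not the exact setup/common.sh code (which mapfiles the whole file into an array first), shown only to make the traced loops easier to follow:

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node files live under /sys/devices/system/node/node<N>/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}            # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                       # e.g. 0 for HugePages_Surp in the run above
                return 0
            fi
        done < "$mem_f"
        echo 0                                    # field not present
    }
    # usage matching the trace: get_meminfo HugePages_Surp   -> 0
    #                           get_meminfo MemTotal         -> total memory in kB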
00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9153100 kB' 'MemAvailable: 10534184 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451564 kB' 'Inactive: 1266592 kB' 'Active(anon): 130704 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121900 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138568 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77428 kB' 'KernelStack: 6256 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.670 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
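For reference: the meminfo snapshot being scanned in this stretch reports 'HugePages_Total: 512' with 'Hugepagesize: 2048 kB', i.e. 512 * 2048 kB = 1048576 kB (1 GiB) of hugepage memory, which matches the 'Hugetlb: 1048576 kB' field in the same snapshot and is the one-gigabyte-per-node pool that per_node_1G_alloc sets up and verifies on this single-node VM.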
00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
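The long runs of [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pairs above and below are setup/common.sh's get_meminfo walking every key of that snapshot until it reaches the one requested (HugePages_Rsvd here) and echoing its value. Stripped of the xtrace noise, the helper's logic is roughly the following sketch (simplified: single-digit node numbers, no error handling, and the mapfile bookkeeping visible in the trace folded into a plain read loop):

get_meminfo() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # per-node counters come from sysfs when a node index is given
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node [0-9] }            # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # every miss shows up in the xtrace as a [[ ... ]] / continue pair
        echo "$val"                         # e.g. "echo 0" for HugePages_Rsvd, "echo 512" for HugePages_Total
        return 0
    done < "$mem_f"
    return 1
}

setup/hugepages.sh then captures the result, e.g. resv=$(get_meminfo HugePages_Rsvd), which is why each scan in the log ends with a single echo followed by return 0.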
00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 
21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.933 nr_hugepages=512 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:43.933 resv_hugepages=0 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.933 surplus_hugepages=0 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.933 anon_hugepages=0 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9153100 kB' 'MemAvailable: 10534184 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451824 kB' 'Inactive: 1266592 kB' 'Active(anon): 130964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121900 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138568 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77428 kB' 'KernelStack: 6256 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.933 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
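The scan running through here is get_meminfo HugePages_Total for the check at setup/hugepages.sh@110: the surplus and reserved counts read just above (both 0 in this run) plus the 512 requested pages must equal HugePages_Total, after which each NUMA node is compared against what the test asked it to hold (the node0=512 expecting 512 line further down). Condensed into a sketch that reuses the get_meminfo stub above, with a hypothetical wrapper name and the nodes_test/nodes_sys array bookkeeping of setup/hugepages.sh trimmed away:

check_node_hugepages() {    # hypothetical name, not the verbatim verify_nr_hugepages
    local expected=$1 node surp resv total
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 512 in this run
    # the global pool must account for every requested, surplus and reserved page
    (( total == expected + surp + resv )) || return 1
    # each online NUMA node is then reported against the count requested for it
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting ${expected}"
    done
}

With 512 pages requested and a single node, this reduces to 512 == 512 + 0 + 0 and the node0=512 expecting 512 line that appears just before END TEST per_node_1G_alloc.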
00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.934 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9153176 kB' 'MemUsed: 3088796 kB' 'SwapCached: 0 kB' 'Active: 451708 kB' 'Inactive: 1266592 kB' 'Active(anon): 130848 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1597916 kB' 'Mapped: 48728 kB' 'AnonPages: 122044 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61140 kB' 'Slab: 138560 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.935 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.935 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.936 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.937 21:25:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.937 node0=512 expecting 512 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.937 00:03:43.937 real 0m0.518s 00:03:43.937 user 0m0.267s 00:03:43.937 sys 0m0.285s 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.937 21:25:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.937 ************************************ 00:03:43.937 END TEST per_node_1G_alloc 00:03:43.937 ************************************ 00:03:43.937 21:25:28 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:43.937 21:25:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.937 21:25:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.937 21:25:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.937 ************************************ 00:03:43.937 START TEST even_2G_alloc 00:03:43.937 ************************************ 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.937 21:25:28 
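Before the even_2G_alloc run that starts above gets going, the lines that close out per_node_1G_alloc (the "node0=512 expecting 512" check) trace the verification loop in setup/hugepages.sh: it walks the per-node counters gathered earlier, records the distinct values it saw, prints the observed-vs-expected pair for each node, and asserts they match. A minimal sketch of the shape of that loop body inside the verification function, using the array names from the trace; "expected" here stands in for the literal 512 baked into this test, and the real script carries more bookkeeping than shown:

# nodes_test[node] holds the hugepage count observed on that NUMA node (node0 -> 512 in the run above);
# nodes_sys[] is the matching system-side tally also referenced in the trace.
for node in "${!nodes_test[@]}"; do
  sorted_t[${nodes_test[node]}]=1                        # remember each distinct observed count
  sorted_s[${nodes_sys[node]}]=1                         # remember each distinct system count
  echo "node$node=${nodes_test[node]} expecting $expected"
  [[ ${nodes_test[node]} == "$expected" ]] || return 1   # a mismatch fails the test
done

With a single node, 512 pages of the 2048 kB default size come to 1 GiB per node, and the check reduces to the "[[ 512 == 512 ]]" comparison visible just before the END TEST banner.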
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.937 21:25:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.197 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.197 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
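The stretch above covers the even_2G_alloc setup: get_test_nr_hugepages turns the requested 2097152 kB (2 GiB) into a page count, get_test_nr_hugepages_per_node spreads that count over the available NUMA nodes (only node 0 on this VM), and the allocation knobs are set before scripts/setup.sh is invoked. A minimal sketch of those three steps, assuming the size argument is in kB and using the 2048 kB Hugepagesize reported in the meminfo dumps that follow; the helpers in setup/hugepages.sh take extra options not shown here:

# Step 1: size in kB -> number of default-sized hugepages (2097152 / 2048 = 1024, the nr_hugepages=1024 in the trace).
size=2097152
default_hugepages=2048
nr_hugepages=$(( size / default_hugepages ))

# Step 2: no user-supplied node list and a single node, so the whole count lands on node 0,
# matching "nodes_test[_no_nodes - 1]=1024" above.
declare -a nodes_test
_no_nodes=1
nodes_test[_no_nodes - 1]=$nr_hugepages       # nodes_test[0]=1024

# Step 3: request an even allocation and re-run setup, as the NRHUGE/HUGE_EVEN_ALLOC assignments above do.
NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh

The trace then moves into verify_nr_hugepages, which re-reads the counters through get_meminfo to confirm the allocation took effect.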
setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102848 kB' 'MemAvailable: 9483932 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 452540 kB' 'Inactive: 1266592 kB' 'Active(anon): 131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122752 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138544 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77404 kB' 'KernelStack: 6304 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.197 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.198 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.461 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.462 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102372 kB' 'MemAvailable: 9483456 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451952 kB' 'Inactive: 1266592 kB' 'Active(anon): 131092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122220 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138572 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77432 kB' 'KernelStack: 6288 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.462 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.463 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102372 kB' 'MemAvailable: 9483456 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451916 kB' 'Inactive: 1266592 kB' 'Active(anon): 131056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122212 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138572 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77432 kB' 'KernelStack: 6272 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 
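At this point the trace has extracted two of the three counters verify_nr_hugepages needs here: anon=0 (AnonHugePages) and surp=0 (HugePages_Surp); the scan that continues below does the same for HugePages_Rsvd. All of the repeated "[[ Field == \H\u\g\e... ]] / continue" pairs come from one small helper, get_meminfo in setup/common.sh, which slurps the meminfo file and walks it field by field until the requested key matches. A minimal sketch of the system-wide case, reconstructed from the traced lines (the version in common.sh also takes a node argument and then reads the per-node /sys file instead):

shopt -s extglob                              # needed for the +([0-9]) pattern used to strip the "Node N " prefix

get_meminfo() {
  local get=$1 var val _
  local mem_f=/proc/meminfo                   # a per-node lookup would use /sys/devices/system/node/node<N>/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")            # no-op for /proc/meminfo; strips the prefix carried by the per-node file
  local line
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue          # each skipped field is one "[[ ... ]] / continue" pair in the trace
    echo "$val"                               # e.g. AnonHugePages -> 0, HugePages_Surp -> 0
    return 0
  done
  echo 0                                      # key not present at all
}

surp=$(get_meminfo HugePages_Surp)            # -> 0, as seen just above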
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.464 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.465 nr_hugepages=1024 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.465 resv_hugepages=0 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.465 surplus_hugepages=0 00:03:44.465 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.465 anon_hugepages=0 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102372 kB' 'MemAvailable: 9483456 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451828 kB' 'Inactive: 1266592 kB' 'Active(anon): 130968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122100 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138572 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77432 kB' 'KernelStack: 6256 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.466 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.467 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102372 kB' 'MemUsed: 4139600 kB' 'SwapCached: 0 kB' 'Active: 451832 kB' 'Inactive: 1266592 kB' 'Active(anon): 130972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1597916 kB' 'Mapped: 48728 kB' 'AnonPages: 122100 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61140 kB' 'Slab: 138572 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.468 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 
21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.469 node0=1024 expecting 1024 00:03:44.469 ************************************ 00:03:44.469 END TEST even_2G_alloc 00:03:44.469 ************************************ 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.469 00:03:44.469 real 0m0.583s 00:03:44.469 user 0m0.281s 00:03:44.469 sys 0m0.284s 00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 
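The xtrace above is setup/common.sh's get_meminfo helper at work: it reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), strips any "Node <N> " prefix, then walks field by field with IFS=': ' and continues until it reaches the requested key (HugePages_Rsvd, HugePages_Total, then per-node HugePages_Surp) and echoes its value, which setup/hugepages.sh checks against the requested page count. The following is a minimal sketch reconstructed from the traced commands; it is simplified (no mapfile or extglob prefix stripping, single-node handling only) and is not the exact setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace above (illustrative, simplified).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-NUMA-node file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node lines carry a "Node <N> " prefix
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key and value
        if [[ $var == "$get" ]]; then           # keep scanning (continue) until the key matches
            echo "$val"                         # print the bare number, e.g. 0 or 1024
            return 0
        fi
    done < "$mem_f"
    return 1
}

# The even_2G_alloc accounting shown above: the configured total must equal the
# requested page count plus surplus plus reserved pages.
nr_hugepages=1024
resv=$(get_meminfo HugePages_Rsvd)
surp=$(get_meminfo HugePages_Surp)
total=$(get_meminfo HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=$(get_meminfo HugePages_Total 0) expecting $nr_hugepages"
fi

The odd_alloc test that starts below repeats the same verification with nr_hugepages=1025: HUGEMEM=2049 MB is 2098176 kB, which rounds up to 1025 pages of 2048 kB (Hugetlb: 2099200 kB in the later meminfo dump).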
00:03:44.469 21:25:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.469 21:25:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.469 21:25:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.469 21:25:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.469 21:25:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.469 ************************************ 00:03:44.469 START TEST odd_alloc 00:03:44.469 ************************************ 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.469 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.042 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.042 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:45.042 21:25:29 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097920 kB' 'MemAvailable: 9479004 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 452000 kB' 'Inactive: 1266592 kB' 'Active(anon): 131140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48924 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138604 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6312 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.042 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.043 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097920 kB' 'MemAvailable: 9479004 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 452048 kB' 'Inactive: 1266592 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122336 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138604 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6272 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459988 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.044 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.045 21:25:29 
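Editor's note: the long field-by-field scans above (for AnonHugePages and HugePages_Surp, with HugePages_Rsvd following below) are all the same get_meminfo pattern: split each /proc/meminfo line on ': ', skip until the requested key matches, then echo its value. A simplified stand-in for that helper, condensed from what the trace shows rather than quoted from setup/common.sh:

# Hedged sketch of the get_meminfo loop traced above (simplified stand-in).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument the real helper reads the per-node file instead and
    # strips the leading "Node <N> " prefix from each line
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# e.g. surp=$(get_meminfo HugePages_Surp)   # -> 0, matching "surp=0" just above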
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.045 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097920 kB' 'MemAvailable: 9479004 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451596 kB' 'Inactive: 1266592 kB' 'Active(anon): 130736 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122100 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138608 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6256 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.046 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 
21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.047 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:45.048 nr_hugepages=1025 00:03:45.048 resv_hugepages=0 00:03:45.048 surplus_hugepages=0 00:03:45.048 anon_hugepages=0 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097920 kB' 'MemAvailable: 9479004 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 
451864 kB' 'Inactive: 1266592 kB' 'Active(anon): 131004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122108 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138608 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6256 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 
21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.048 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.049 
21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.049 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097920 kB' 'MemUsed: 4144052 kB' 'SwapCached: 0 kB' 'Active: 451596 kB' 'Inactive: 1266592 kB' 'Active(anon): 130736 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1597916 kB' 'Mapped: 48728 kB' 'AnonPages: 122100 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61140 kB' 'Slab: 138604 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.050 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.051 node0=1025 expecting 1025 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:45.051 
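
The dominant pattern in the trace above is setup/common.sh's get_meminfo helper scanning every "Key: value" field of /proc/meminfo (or a per-node meminfo file when a node number is passed) and skipping all but the requested key, which is why nearly every xtrace line is another "continue". The sketch below is a minimal stand-in for that visible behavior, written only for readability; the function name get_meminfo_sketch, its quoting, and the prefix handling are illustrative assumptions, not code copied from the SPDK scripts.

    #!/usr/bin/env bash
    # Look up one field of /proc/meminfo (or a NUMA node's own meminfo file),
    # mirroring the loop traced above: split each line on ': ', skip keys that
    # do not match, and echo the value of the one that does.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#Node * }               # per-node files prefix each line with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # each skipped key is one "continue" in the xtrace
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    # The odd_alloc run above resolves these to 0 and 1025 respectively.
    get_meminfo_sketch HugePages_Rsvd
    get_meminfo_sketch HugePages_Total
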
00:03:45.051 real 0m0.572s 00:03:45.051 user 0m0.291s 00:03:45.051 sys 0m0.285s 00:03:45.051 ************************************ 00:03:45.051 END TEST odd_alloc 00:03:45.051 ************************************ 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.051 21:25:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.051 21:25:30 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:45.051 21:25:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.051 21:25:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.051 21:25:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.310 ************************************ 00:03:45.310 START TEST custom_alloc 00:03:45.310 ************************************ 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- 
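
Before the custom_alloc trace continues, it helps to restate what the odd_alloc test that just finished (real 0m0.572s) actually verified: with the pool deliberately sized to an odd count of 1025 pages, the totals read back from /proc/meminfo and the node0 meminfo must equal nr_hugepages plus any surplus and reserved pages, which is where the closing "node0=1025 expecting 1025" line comes from. A condensed, self-contained version of that check might look like the following; the variable names and the awk lookups are illustrative, not the hugepages.sh implementation.

    #!/usr/bin/env bash
    # Re-check the odd_alloc hugepage accounting directly from /proc/meminfo.
    nr_hugepages=1025
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in the dumps above
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0

    if (( total == nr_hugepages + surp + resv )); then
        echo "node0=$total expecting $nr_hugepages"               # mirrors the log's final odd_alloc line
    else
        echo "odd_alloc: hugepage accounting mismatch" >&2
    fi
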
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.310 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.573 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.573 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.573 21:25:30 
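
The custom_alloc prologue traced above reduces to a small amount of arithmetic before scripts/setup.sh is invoked: the requested 1048576 kB (1 GiB) pool is divided by the 2048 kB Hugepagesize reported in the meminfo dumps, giving the 512 pages that end up pinned to node 0 through HUGENODE='nodes_hp[0]=512'. The snippet below only illustrates that calculation under those assumptions; it is not the setup/hugepages.sh code, and the final echo stands in for the real setup.sh invocation.

    #!/usr/bin/env bash
    # Translate the requested pool size into hugepages and build the HUGENODE
    # string seen in the trace above.
    size_kb=1048576
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB in the dumps above
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512

    HUGENODE="nodes_hp[0]=${nr_hugepages}"
    echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"
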
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152664 kB' 'MemAvailable: 10533748 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 452572 kB' 'Inactive: 1266592 kB' 'Active(anon): 131712 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123048 kB' 'Mapped: 48980 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138608 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6240 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152664 kB' 'MemAvailable: 10533748 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451688 kB' 'Inactive: 1266592 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122220 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138612 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.575 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.576 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152664 kB' 'MemAvailable: 10533748 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451700 kB' 'Inactive: 1266592 kB' 'Active(anon): 130840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122224 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138612 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- 
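Each of these get_meminfo runs repeats the same preamble (common.sh@18 through @29 in the trace): with an empty node id the test for /sys/devices/system/node/node/meminfo fails, so the global /proc/meminfo is read, and the "Node N " prefix that only per-node meminfo files carry is stripped with an extglob expansion. A sketch of that source-selection step, with the function name pick_meminfo and the node parameter as illustrative assumptions:

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    pick_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }

    # Usage: pick_meminfo ""   # empty node id -> /proc/meminfo, as in this trace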
setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.577 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.577 21:25:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.578 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.579 nr_hugepages=512 00:03:45.579 resv_hugepages=0 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.579 surplus_hugepages=0 00:03:45.579 anon_hugepages=0 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.579 
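Just above, hugepages.sh prints the values it has collected (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checks them at hugepages.sh@107 and @109 before re-reading HugePages_Total. The same arithmetic, restated as a standalone snippet with the values echoed in the log:

    # Consistency check from the trace: the 512 pages requested for the
    # custom-allocation test must all be plain hugepages, with nothing
    # reserved or surplus left over.
    nr_hugepages=512
    surp=0   # from get_meminfo HugePages_Surp above
    resv=0   # from get_meminfo HugePages_Rsvd above
    if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
        echo "hugepage accounting is consistent"
    fi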
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.579 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9152664 kB' 'MemAvailable: 10533748 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451616 kB' 'Inactive: 1266592 kB' 'Active(anon): 130756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122128 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138612 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6256 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:45.580 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 (loop repeats for each /proc/meminfo field, compared against HugePages_Total and skipped with continue until it matches)
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9153184 kB' 'MemUsed: 3088788 kB' 'SwapCached: 0 kB' 'Active: 451968 kB' 'Inactive: 1266592 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1597916 kB' 'Mapped: 48728 kB' 'AnonPages: 122232 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61140 kB' 'Slab: 138612 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
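The HugePages_Surp lookup above is the per-node variant of the same helper: because a node id (0) was passed, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the leading "Node 0 " prefix is stripped from every line before parsing, which is why the node0 snapshot reads like plain meminfo output. A rough sketch of that behaviour under the same assumptions (standard sysfs paths; get_node_meminfo is an illustrative name):

  #!/usr/bin/env bash
  shopt -s extglob  # needed for the +([0-9]) pattern used below

  # Read one field either from /proc/meminfo or from a NUMA node's meminfo.
  get_node_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Node meminfo lines look like "Node 0 HugePages_Surp: 0"; drop the prefix
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_node_meminfo HugePages_Surp 0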
00:03:45.841 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 (loop repeats for each node0 meminfo field, compared against HugePages_Surp and skipped with continue until it matches)
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.842 node0=512 expecting 512
00:03:45.842 ************************************
00:03:45.842 END TEST custom_alloc
00:03:45.842 ************************************
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:45.842
00:03:45.842 real	0m0.582s
00:03:45.842 user	0m0.302s
00:03:45.842 sys	0m0.285s
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:45.842 21:25:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.842 21:25:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:45.842 21:25:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:45.842 21:25:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:45.842 21:25:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.842 ************************************
00:03:45.842 START TEST no_shrink_alloc
00:03:45.842 ************************************
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
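custom_alloc passes here because the kernel reports exactly the pool the test configured: HugePages_Total (512) equals the requested count plus surplus plus reserved pages, and node0 holds all 512 of them. A small sketch of that accounting check for a single-node machine like this VM (variable names are illustrative, not the test's own):

  #!/usr/bin/env bash
  # Verify that the configured hugepage pool matches what /proc/meminfo reports.
  expected=512

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

  if (( total == expected + surp + resv )); then
      echo "hugepage pool OK: total=$total surp=$surp resv=$resv"
  else
      echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
      exit 1
  fi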
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.842 21:25:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:46.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:46.101 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.101 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
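get_test_nr_hugepages 2097152 0 turns a requested pool size into a page count: with the default Hugepagesize of 2048 kB shown in the snapshots, 2097152 / 2048 = 1024, which matches the nr_hugepages=1024 here and the HugePages_Total: 1024 / Hugetlb: 2097152 kB reported once scripts/setup.sh has run. A sketch of that arithmetic, assuming the size argument is in kB as the values in this trace suggest:

  #!/usr/bin/env bash
  # Convert a requested pool size (in kB) into a hugepage count using the
  # system's default hugepage size, then show what the kernel should report.
  size_kb=2097152   # value passed to get_test_nr_hugepages in this run

  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  nr_hugepages=$(( size_kb / hugepagesize_kb ))

  echo "Hugepagesize:      ${hugepagesize_kb} kB"
  echo "nr_hugepages:      ${nr_hugepages}"                          # 2097152 / 2048 = 1024
  echo "expected Hugetlb:  $(( nr_hugepages * hugepagesize_kb )) kB" # 2097152 kB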
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.101 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103764 kB' 'MemAvailable: 9484848 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 452372 kB' 'Inactive: 1266592 kB' 'Active(anon): 131512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122664 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138640 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77500 kB' 'KernelStack: 6244 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 (loop repeats for each /proc/meminfo field, compared against AnonHugePages and skipped with continue until it matches)
setup/common.sh@31 -- # IFS=': ' 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.102 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103764 kB' 'MemAvailable: 9484848 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 452016 kB' 'Inactive: 1266592 kB' 'Active(anon): 131156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122320 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138644 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77504 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.103 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.364 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103764 kB' 'MemAvailable: 9484848 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 451992 kB' 'Inactive: 1266592 kB' 'Active(anon): 131132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122364 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138644 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77504 kB' 'KernelStack: 6272 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.365 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.366 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.367 nr_hugepages=1024 00:03:46.367 resv_hugepages=0 00:03:46.367 surplus_hugepages=0 00:03:46.367 anon_hugepages=0 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103764 kB' 'MemAvailable: 9484852 kB' 'Buffers: 2436 kB' 'Cached: 1595484 kB' 'SwapCached: 0 kB' 'Active: 451908 kB' 'Inactive: 1266596 kB' 'Active(anon): 131048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122188 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61140 kB' 'Slab: 138644 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77504 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.367 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.368 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
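The long run of 'continue' frames above is setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) one key at a time until it reaches the requested field, in this case HugePages_Total. A minimal bash sketch of that lookup, reconstructed only from the xtrace lines shown here — get_meminfo_sketch is an illustrative name and simplified argument handling, not a copy of the SPDK helper:

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup the trace above is stepping through.
# Reconstructed from the xtrace lines only; not the SPDK function itself.
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node file when it exists
    # (the trace later switches to /sys/devices/system/node/node0/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        # Same split the log shows: IFS=': ' plus read -r var val _
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # each mismatch logs one 'continue'
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total   # e.g. prints 1024 on this runner's /proc/meminfo

Parsing every key with plain read keeps the helper dependency-free, which is why the trace records one 'continue' per non-matching meminfo field before the value is finally echoed.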
00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.369 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103764 kB' 'MemUsed: 4138208 kB' 'SwapCached: 0 kB' 'Active: 451928 kB' 'Inactive: 1266596 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1597920 kB' 'Mapped: 48728 kB' 'AnonPages: 122204 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61140 kB' 'Slab: 138644 kB' 'SReclaimable: 61140 kB' 'SUnreclaim: 77504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
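Before the per-node scan above, the hugepages.sh frames at @110-@117 assert that HugePages_Total equals the requested page count plus surplus and reserved pages, then repeat the HugePages_Surp lookup for each NUMA node (ending in the 'node0=1024 expecting 1024' line further down). A rough, self-contained sketch of that accounting — meminfo_val is a hypothetical stand-in for the script's own parser, and nr_hugepages is fixed to the 1024 pages this run configured:

#!/usr/bin/env bash
# Sketch of the accounting checks visible at hugepages.sh@110-@117 in the
# trace; simplified and reconstructed from the log, not the SPDK script.
shopt -s extglob nullglob

meminfo_val() {   # meminfo_val KEY FILE  -> value printed after "KEY:"
    awk -v k="$1:" '{ for (i = 1; i <= NF; i++) if ($i == k) print $(i + 1) }' "$2"
}

nr_hugepages=1024
total=$(meminfo_val HugePages_Total /proc/meminfo)   # 1024 in the log above
surp=$(meminfo_val HugePages_Surp /proc/meminfo)     # 0 in the log above
resv=$(meminfo_val HugePages_Rsvd /proc/meminfo)     # 0 in the log above
(( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting off" >&2; exit 1; }

# Same idea per NUMA node: query the node's own meminfo and report what is
# allocated there versus what the test expects.
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                                     # "node0" -> "0"
    pages=$(meminfo_val HugePages_Total "$node/meminfo")
    surp=$(meminfo_val HugePages_Surp "$node/meminfo")    # 0 for node0 here
    echo "node$id=$pages expecting $nr_hugepages"         # cf. "node0=1024 expecting 1024"
done

Checking the identity both globally and per node is what the no_shrink_alloc case relies on when it later requests 512 pages and finds 1024 already allocated (see the INFO line from setup.sh further down).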
00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.370 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.371 node0=1024 expecting 1024 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.371 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.630 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.630 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.630 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.630 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102512 kB' 'MemAvailable: 9483588 kB' 'Buffers: 2436 kB' 'Cached: 1595484 kB' 'SwapCached: 0 kB' 'Active: 448360 kB' 'Inactive: 1266596 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118716 kB' 'Mapped: 48140 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 138512 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 77392 kB' 'KernelStack: 6228 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.630 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.631 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103024 kB' 'MemAvailable: 9484100 kB' 'Buffers: 2436 kB' 'Cached: 1595484 kB' 'SwapCached: 0 kB' 'Active: 447828 kB' 'Inactive: 1266596 kB' 'Active(anon): 126968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118096 kB' 'Mapped: 47988 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 138512 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 77392 kB' 'KernelStack: 6176 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.632 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103244 kB' 'MemAvailable: 9484316 kB' 'Buffers: 2436 kB' 'Cached: 1595480 kB' 'SwapCached: 0 kB' 'Active: 447888 kB' 'Inactive: 1266592 kB' 'Active(anon): 127028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118188 kB' 'Mapped: 47992 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 138508 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 77388 kB' 
'KernelStack: 6192 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.897 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
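(The repeated "[[ Key == \H\u\g\e\P\a\g\e\s... ]] / continue" entries above and below are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time until the requested field matches. A minimal sketch of that loop, reconstructed only from this trace and not copied from the SPDK sources, is shown here; the real helper also reads the file with mapfile and strips a "Node N " prefix when a per-node meminfo file exists, which this simplified, hypothetical version omits.)

# Hypothetical reconstruction of the traced get_meminfo loop (illustrative only).
get_meminfo() {
    local get=$1                          # e.g. HugePages_Surp, HugePages_Rsvd, HugePages_Total
    local var val _
    while IFS=': ' read -r var val _; do  # "_" swallows the trailing unit field ("kB")
        if [[ $var == "$get" ]]; then
            echo "$val"                   # matched: print the value and stop
            return 0
        fi
    done < /proc/meminfo
    echo 0                                # key not present
}

# Example use, mirroring the surrounding trace:
# surp=$(get_meminfo HugePages_Surp)

(In this run the scans return anon=0, surp=0 and resv=0 with HugePages_Total at 1024, which is why the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines and the (( 1024 == nr_hugepages + surp + resv )) check later in the trace pass.)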
00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.898 nr_hugepages=1024 00:03:46.898 resv_hugepages=0 00:03:46.898 surplus_hugepages=0 00:03:46.898 anon_hugepages=0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8103244 kB' 'MemAvailable: 9484320 kB' 'Buffers: 2436 kB' 'Cached: 1595484 kB' 'SwapCached: 0 kB' 'Active: 447908 kB' 'Inactive: 1266596 kB' 'Active(anon): 127048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118216 kB' 'Mapped: 47992 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 
'Slab: 138504 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 77384 kB' 'KernelStack: 6192 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.898 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.900 21:25:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.900 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8102996 kB' 'MemUsed: 4138976 kB' 'SwapCached: 0 kB' 'Active: 447808 kB' 'Inactive: 1266596 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1266596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1597920 kB' 'Mapped: 47988 kB' 'AnonPages: 118100 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61120 kB' 'Slab: 138500 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 77380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 
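The scan traced above and continued just below is setup/common.sh reading the node-0 meminfo dump field by field with IFS=': ' until it reaches the key it was asked for (HugePages_Total earlier, HugePages_Surp here) and echoing that key's value. A condensed sketch of that lookup, assuming the standard Linux meminfo layout and using a hypothetical helper name rather than the exact SPDK code:

shopt -s extglob                     # needed for the "Node <N> " prefix strip below
# get_meminfo_sketch KEY [NODE]  -- hypothetical condensation of the lookup traced here.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every key with "Node <N> "
    while IFS=': ' read -r var val _; do   # "HugePages_Surp:   0" -> var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0   # -> 0 for the node dumped above

With that value in hand, the no_shrink_alloc check below only has to compare the reported pool size against nr_hugepages plus the surplus and reserved counts it tracked earlier.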
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 
21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.901 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.902 node0=1024 expecting 1024 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.902 ************************************ 00:03:46.902 END TEST no_shrink_alloc 00:03:46.902 ************************************ 00:03:46.902 00:03:46.902 real 0m1.095s 00:03:46.902 user 0m0.542s 00:03:46.902 sys 0m0.571s 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.902 21:25:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.902 21:25:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.902 ************************************ 00:03:46.902 END TEST hugepages 00:03:46.902 ************************************ 00:03:46.902 00:03:46.902 real 0m4.841s 00:03:46.902 user 0m2.347s 00:03:46.902 sys 0m2.440s 00:03:46.902 21:25:31 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.902 21:25:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.902 21:25:31 setup.sh -- 
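The clear_hp teardown traced just above resets every huge page pool the suite touched: for each NUMA node it walks the hugepages-* directories and writes 0 back, then exports CLEAR_HUGE=yes for later stages. A rough sketch of that loop; the nr_hugepages redirect target is an assumption, since bash xtrace does not show redirections:

clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Assumed target: each pool's nr_hugepages counter (not visible in the trace).
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}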
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:46.902 21:25:31 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.902 21:25:31 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.902 21:25:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.902 ************************************ 00:03:46.902 START TEST driver 00:03:46.902 ************************************ 00:03:46.902 21:25:31 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.161 * Looking for test storage... 00:03:47.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:47.161 21:25:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:47.161 21:25:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.161 21:25:31 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.729 21:25:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:47.729 21:25:32 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.729 21:25:32 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.729 21:25:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:47.729 ************************************ 00:03:47.729 START TEST guess_driver 00:03:47.729 ************************************ 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:47.729 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:47.729 Looking for driver=uio_pci_generic 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.729 21:25:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.306 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:48.306 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:48.306 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.306 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.306 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.306 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.564 21:25:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.129 00:03:49.129 real 0m1.477s 00:03:49.129 user 0m0.561s 00:03:49.129 sys 0m0.905s 00:03:49.130 21:25:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.130 21:25:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.130 ************************************ 00:03:49.130 END TEST guess_driver 00:03:49.130 ************************************ 00:03:49.130 ************************************ 00:03:49.130 END TEST driver 00:03:49.130 ************************************ 00:03:49.130 00:03:49.130 real 0m2.148s 00:03:49.130 user 0m0.807s 00:03:49.130 sys 0m1.389s 00:03:49.130 21:25:34 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.130 21:25:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.130 21:25:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.130 21:25:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.130 21:25:34 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.130 21:25:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.130 ************************************ 00:03:49.130 START TEST devices 00:03:49.130 
************************************ 00:03:49.130 21:25:34 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.388 * Looking for test storage... 00:03:49.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:49.388 21:25:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:49.388 21:25:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:49.388 21:25:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.388 21:25:34 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:49.955 21:25:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
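The guess_driver run traced above reduces to a two-step preference: pick vfio when the kernel exposes IOMMU groups (or vfio's unsafe no-IOMMU mode is switched on), otherwise fall back to uio_pci_generic provided modprobe can resolve a real module for it. A hypothetical restatement of that decision, not the SPDK driver.sh itself:

shopt -s nullglob                     # so an empty iommu_groups glob really counts as zero
guess_driver_sketch() {
    local unsafe_vfio=''
    local -a groups
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci                 # assumed spelling of the vfio choice; this run never reaches it
        return 0
    fi
    # No IOMMU support: accept uio_pci_generic only if modprobe resolves an actual .ko for it.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

On this VM the iommu_groups glob is empty and the unsafe flag is unset, so the run above lands on uio_pci_generic, exactly as the "Looking for driver=uio_pci_generic" line reports.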
00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:49.955 21:25:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.955 21:25:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:49.955 21:25:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.955 No valid GPT data, bailing 00:03:49.955 21:25:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.215 21:25:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.215 21:25:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.215 21:25:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.215 21:25:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.215 21:25:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.215 21:25:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:50.215 21:25:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:50.215 21:25:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:50.215 No valid GPT data, bailing 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:50.215 21:25:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:50.215 21:25:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:50.215 21:25:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:50.215 No valid GPT data, bailing 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:50.215 21:25:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:50.215 21:25:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:50.215 21:25:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:50.215 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:50.215 21:25:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:50.474 No valid GPT data, bailing 00:03:50.474 21:25:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:50.474 21:25:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.474 21:25:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.474 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:50.474 21:25:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:50.474 21:25:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:50.474 21:25:35 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:50.474 21:25:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:50.474 21:25:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.474 21:25:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:50.474 21:25:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:50.474 21:25:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.474 21:25:35 setup.sh.devices 
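Each pass of the block loop above applies the same filter before a namespace becomes a test candidate: skip anything the earlier zoned check flagged, treat a device as free when spdk-gpt.py and blkid find no partition table, read its size, and keep it only if it clears min_disk_size (3221225472 bytes, i.e. 3 GiB), remembering its PCI address. A compressed sketch of that filter with hypothetical names, and a plain blockdev size read standing in for sec_size_to_bytes:

# Hypothetical condensation of the candidate-disk filter traced above (not the SPDK devices.sh code).
shopt -s nullglob
min_disk_size=3221225472                      # 3 GiB, the threshold set in the trace
blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $dev == *c* ]] && continue             # skip nvme<X>c<Y>n<Z>-style controller paths
    [[ $(< "$block/queue/zoned") != none ]] && continue   # zoned namespaces were filtered out earlier
    # "Free" = no partition-table signature; spdk-gpt.py plus blkid play this role above.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    size=$(blockdev --getsize64 "/dev/$dev")  # stand-in for sec_size_to_bytes in setup/common.sh
    (( size >= min_disk_size )) || continue
    blocks+=("$dev")                          # the real script also records each keeper's PCI address
done
printf 'usable test disks: %s\n' "${blocks[*]}"

In this run all four namespaces (4294967296-byte nvme0n1/n2/n3 and the 5368709120-byte nvme1n1) pass, and nvme0n1 is declared the test disk.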
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.474 21:25:35 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.474 21:25:35 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.474 21:25:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.474 ************************************ 00:03:50.474 START TEST nvme_mount 00:03:50.474 ************************************ 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:50.474 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.475 21:25:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.411 Creating new GPT entries in memory. 00:03:51.411 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.411 other utilities. 00:03:51.411 21:25:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.411 21:25:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.411 21:25:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.411 21:25:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.411 21:25:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:52.347 Creating new GPT entries in memory. 00:03:52.347 The operation has completed successfully. 
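The nvme_mount preparation traced above is: wipe the GPT on the test disk, create one small partition under a flock on the whole disk, wait for udev to announce the new partition (the sync_dev_uevents.sh helper), then format it ext4 and mount it under the test directory. Roughly, with the start and end sectors taken verbatim from the trace and the paths as they appear there:

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount     # mount point used in the trace

sgdisk "$disk" --zap-all                                   # destroy any existing GPT/MBR
# sync_dev_uevents.sh block/partition nvme0n1p1 runs in the background and blocks
# until the kernel emits the partition uevent; the partition itself is created here:
flock "$disk" sgdisk "$disk" --new=1:2048:264191
mkfs.ext4 -qF "$part"                                      # quiet, force
mkdir -p "$mnt"
mount "$part" "$mnt"

The 2048..264191 span is 262144 sectors, which is the requested 1073741824-byte size divided by 4096 in setup/common.sh just above.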
00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56958 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.347 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.606 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.864 21:25:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.864 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.864 21:25:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.123 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.123 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.123 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.123 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.123 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:53.123 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:53.123 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.382 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.641 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.900 21:25:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.159 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.159 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:54.159 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.159 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.159 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.159 21:25:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.159 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.159 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.159 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.159 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.419 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.419 00:03:54.419 real 0m3.975s 00:03:54.419 user 0m0.656s 00:03:54.419 sys 0m1.050s 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.419 21:25:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.419 ************************************ 00:03:54.419 END TEST nvme_mount 00:03:54.419 
************************************ 00:03:54.419 21:25:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:54.419 21:25:39 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.419 21:25:39 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.419 21:25:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:54.419 ************************************ 00:03:54.419 START TEST dm_mount 00:03:54.419 ************************************ 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:54.419 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:55.356 Creating new GPT entries in memory. 00:03:55.356 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:55.356 other utilities. 00:03:55.356 21:25:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:55.356 21:25:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.356 21:25:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:55.356 21:25:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:55.356 21:25:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:56.733 Creating new GPT entries in memory. 00:03:56.733 The operation has completed successfully. 
00:03:56.733 21:25:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:56.733 21:25:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.733 21:25:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.733 21:25:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.733 21:25:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:57.670 The operation has completed successfully. 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57390 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.670 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.929 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.929 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.929 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.929 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.188 21:25:42 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.188 21:25:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.188 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.188 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:58.188 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.188 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.188 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.188 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.456 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.456 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.456 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.456 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:58.721 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:03:58.721 00:03:58.721 real 0m4.236s 00:03:58.721 user 0m0.490s 00:03:58.721 sys 0m0.697s 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.721 21:25:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:58.721 ************************************ 00:03:58.721 END TEST dm_mount 00:03:58.721 ************************************ 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.721 21:25:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.978 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:58.978 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:58.978 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.978 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:58.978 21:25:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:58.978 21:25:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.978 21:25:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.978 21:25:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.978 21:25:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.978 21:25:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.979 21:25:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:58.979 00:03:58.979 real 0m9.783s 00:03:58.979 user 0m1.822s 00:03:58.979 sys 0m2.349s 00:03:58.979 21:25:43 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.979 21:25:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:58.979 ************************************ 00:03:58.979 END TEST devices 00:03:58.979 ************************************ 00:03:58.979 ************************************ 00:03:58.979 END TEST setup.sh 00:03:58.979 ************************************ 00:03:58.979 00:03:58.979 real 0m21.799s 00:03:58.979 user 0m7.118s 00:03:58.979 sys 0m8.976s 00:03:58.979 21:25:43 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.979 21:25:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.979 21:25:43 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:59.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.911 Hugepages 00:03:59.911 node hugesize free / total 00:03:59.911 node0 1048576kB 0 / 0 00:03:59.911 node0 2048kB 2048 / 2048 00:03:59.911 00:03:59.911 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.911 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:59.911 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:03:59.911 21:25:44 -- spdk/autotest.sh@130 -- # uname -s 00:03:59.911 21:25:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:59.911 21:25:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:59.911 21:25:44 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.735 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.735 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.735 21:25:45 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:01.691 21:25:46 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:01.691 21:25:46 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:01.691 21:25:46 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.691 21:25:46 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:01.691 21:25:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:01.691 21:25:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:01.691 21:25:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.691 21:25:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.691 21:25:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:01.949 21:25:46 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:01.949 21:25:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:01.949 21:25:46 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.207 Waiting for block devices as requested 00:04:02.207 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.465 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.465 21:25:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.465 21:25:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.465 21:25:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:02.465 21:25:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.465 21:25:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.465 21:25:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1557 -- # continue 00:04:02.465 21:25:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.465 21:25:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.465 21:25:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.465 21:25:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.465 21:25:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:02.465 21:25:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.465 21:25:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.465 21:25:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.465 21:25:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.465 21:25:47 -- common/autotest_common.sh@1557 -- # continue 00:04:02.465 21:25:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:02.465 21:25:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.466 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:04:02.466 21:25:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:02.466 21:25:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.466 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:04:02.466 21:25:47 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.290 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.290 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.290 21:25:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:03.290 21:25:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.290 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:04:03.291 21:25:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:03.291 21:25:48 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:03.291 21:25:48 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.291 21:25:48 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:03.291 21:25:48 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:03.291 21:25:48 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:03.291 21:25:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:03.291 21:25:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:03.291 21:25:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.291 21:25:48 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.291 21:25:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:03.549 21:25:48 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:03.549 21:25:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.549 21:25:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:03.550 21:25:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.550 21:25:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:03.550 21:25:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.550 21:25:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:03.550 21:25:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.550 21:25:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:03.550 21:25:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.550 21:25:48 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:03.550 21:25:48 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:03.550 21:25:48 -- common/autotest_common.sh@1593 -- # return 0 00:04:03.550 21:25:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:03.550 21:25:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:03.550 21:25:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.550 21:25:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.550 21:25:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:03.550 21:25:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.550 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:04:03.550 21:25:48 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:03.550 21:25:48 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.550 21:25:48 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.550 21:25:48 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.550 21:25:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.550 21:25:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.550 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:04:03.550 ************************************ 00:04:03.550 START TEST env 00:04:03.550 ************************************ 00:04:03.550 21:25:48 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.550 * Looking for test storage... 
00:04:03.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:03.550 21:25:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.550 21:25:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.550 21:25:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.550 21:25:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.550 ************************************ 00:04:03.550 START TEST env_memory 00:04:03.550 ************************************ 00:04:03.550 21:25:48 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.550 00:04:03.550 00:04:03.550 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.550 http://cunit.sourceforge.net/ 00:04:03.550 00:04:03.550 00:04:03.550 Suite: memory 00:04:03.550 Test: alloc and free memory map ...[2024-07-24 21:25:48.440833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.550 passed 00:04:03.550 Test: mem map translation ...[2024-07-24 21:25:48.465491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.550 [2024-07-24 21:25:48.465542] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.550 [2024-07-24 21:25:48.465591] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.550 [2024-07-24 21:25:48.465600] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.550 passed 00:04:03.550 Test: mem map registration ...[2024-07-24 21:25:48.516236] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:03.550 [2024-07-24 21:25:48.516288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:03.550 passed 00:04:03.808 Test: mem map adjacent registrations ...passed 00:04:03.809 00:04:03.809 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.809 suites 1 1 n/a 0 0 00:04:03.809 tests 4 4 4 0 0 00:04:03.809 asserts 152 152 152 0 n/a 00:04:03.809 00:04:03.809 Elapsed time = 0.169 seconds 00:04:03.809 00:04:03.809 real 0m0.183s 00:04:03.809 user 0m0.167s 00:04:03.809 sys 0m0.013s 00:04:03.809 21:25:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.809 21:25:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:03.809 ************************************ 00:04:03.809 END TEST env_memory 00:04:03.809 ************************************ 00:04:03.809 21:25:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.809 21:25:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.809 21:25:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.809 21:25:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.809 ************************************ 00:04:03.809 START TEST env_vtophys 00:04:03.809 ************************************ 00:04:03.809 21:25:48 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.809 EAL: lib.eal log level changed from notice to debug 00:04:03.809 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 1 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 2 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 3 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 4 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 5 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 6 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 7 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 8 as core 0 on socket 0 00:04:03.809 EAL: Detected lcore 9 as core 0 on socket 0 00:04:03.809 EAL: Maximum logical cores by configuration: 128 00:04:03.809 EAL: Detected CPU lcores: 10 00:04:03.809 EAL: Detected NUMA nodes: 1 00:04:03.809 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:03.809 EAL: Detected shared linkage of DPDK 00:04:03.809 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.809 EAL: Selected IOVA mode 'PA' 00:04:03.809 EAL: Probing VFIO support... 00:04:03.809 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.809 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:03.809 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.809 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.809 EAL: Setting up physically contiguous memory... 00:04:03.809 EAL: Setting maximum number of open files to 524288 00:04:03.809 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.809 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.809 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.809 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.809 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.809 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.809 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.809 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.809 EAL: Hugepages will be freed exactly as allocated. 
00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: TSC frequency is ~2200000 KHz 00:04:03.809 EAL: Main lcore 0 is ready (tid=7fe37e6d2a00;cpuset=[0]) 00:04:03.809 EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 0 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.809 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.809 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.809 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.809 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:03.809 00:04:03.809 00:04:03.809 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.809 http://cunit.sourceforge.net/ 00:04:03.809 00:04:03.809 00:04:03.809 Suite: components_suite 00:04:03.809 Test: vtophys_malloc_test ...passed 00:04:03.809 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 4 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.809 EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 4 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.809 EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 4 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.809 EAL: Trying to obtain current memory policy. 
00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.809 EAL: Restoring previous memory policy: 4 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.809 EAL: request: mp_malloc_sync 00:04:03.809 EAL: No shared files mode enabled, IPC is disabled 00:04:03.809 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.809 EAL: Trying to obtain current memory policy. 00:04:03.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.069 EAL: Trying to obtain current memory policy. 00:04:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.069 EAL: Trying to obtain current memory policy. 00:04:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.069 EAL: Trying to obtain current memory policy. 00:04:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.069 EAL: Restoring previous memory policy: 4 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.069 EAL: request: mp_malloc_sync 00:04:04.069 EAL: No shared files mode enabled, IPC is disabled 00:04:04.069 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.069 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.328 EAL: request: mp_malloc_sync 00:04:04.328 EAL: No shared files mode enabled, IPC is disabled 00:04:04.328 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.328 EAL: Trying to obtain current memory policy. 
00:04:04.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.328 EAL: Restoring previous memory policy: 4 00:04:04.328 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.328 EAL: request: mp_malloc_sync 00:04:04.328 EAL: No shared files mode enabled, IPC is disabled 00:04:04.328 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.328 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.587 EAL: request: mp_malloc_sync 00:04:04.587 EAL: No shared files mode enabled, IPC is disabled 00:04:04.587 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.587 EAL: Trying to obtain current memory policy. 00:04:04.587 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.845 EAL: Restoring previous memory policy: 4 00:04:04.845 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.845 EAL: request: mp_malloc_sync 00:04:04.845 EAL: No shared files mode enabled, IPC is disabled 00:04:04.845 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.103 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.103 EAL: request: mp_malloc_sync 00:04:05.103 EAL: No shared files mode enabled, IPC is disabled 00:04:05.103 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.103 passed 00:04:05.103 00:04:05.103 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.103 suites 1 1 n/a 0 0 00:04:05.103 tests 2 2 2 0 0 00:04:05.103 asserts 5323 5323 5323 0 n/a 00:04:05.103 00:04:05.103 Elapsed time = 1.227 seconds 00:04:05.103 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.103 EAL: request: mp_malloc_sync 00:04:05.103 EAL: No shared files mode enabled, IPC is disabled 00:04:05.103 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.103 EAL: No shared files mode enabled, IPC is disabled 00:04:05.103 EAL: No shared files mode enabled, IPC is disabled 00:04:05.103 EAL: No shared files mode enabled, IPC is disabled 00:04:05.103 00:04:05.103 real 0m1.426s 00:04:05.103 user 0m0.778s 00:04:05.103 sys 0m0.513s 00:04:05.103 21:25:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.103 ************************************ 00:04:05.103 END TEST env_vtophys 00:04:05.103 ************************************ 00:04:05.103 21:25:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.103 21:25:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.103 21:25:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.103 21:25:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.103 21:25:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.103 ************************************ 00:04:05.103 START TEST env_pci 00:04:05.103 ************************************ 00:04:05.361 21:25:50 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.361 00:04:05.361 00:04:05.361 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.361 http://cunit.sourceforge.net/ 00:04:05.361 00:04:05.361 00:04:05.361 Suite: pci 00:04:05.361 Test: pci_hook ...[2024-07-24 21:25:50.116422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58589 has claimed it 00:04:05.361 passed 00:04:05.361 00:04:05.361 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.361 suites 1 1 n/a 0 0 00:04:05.361 tests 1 1 1 0 0 00:04:05.361 asserts 25 25 25 0 n/a 00:04:05.361 00:04:05.361 Elapsed time = 0.002 seconds 00:04:05.361 EAL: Cannot find 
device (10000:00:01.0) 00:04:05.361 EAL: Failed to attach device on primary process 00:04:05.361 00:04:05.361 real 0m0.020s 00:04:05.361 user 0m0.009s 00:04:05.361 sys 0m0.011s 00:04:05.361 21:25:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.361 21:25:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.361 ************************************ 00:04:05.361 END TEST env_pci 00:04:05.361 ************************************ 00:04:05.361 21:25:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.361 21:25:50 env -- env/env.sh@15 -- # uname 00:04:05.361 21:25:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.361 21:25:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.361 21:25:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.361 21:25:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:05.361 21:25:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.361 21:25:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.361 ************************************ 00:04:05.361 START TEST env_dpdk_post_init 00:04:05.361 ************************************ 00:04:05.361 21:25:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.361 EAL: Detected CPU lcores: 10 00:04:05.361 EAL: Detected NUMA nodes: 1 00:04:05.361 EAL: Detected shared linkage of DPDK 00:04:05.361 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.361 EAL: Selected IOVA mode 'PA' 00:04:05.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.361 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:05.361 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:05.361 Starting DPDK initialization... 00:04:05.361 Starting SPDK post initialization... 00:04:05.361 SPDK NVMe probe 00:04:05.361 Attaching to 0000:00:10.0 00:04:05.361 Attaching to 0000:00:11.0 00:04:05.361 Attached to 0000:00:10.0 00:04:05.361 Attached to 0000:00:11.0 00:04:05.361 Cleaning up... 
00:04:05.361 00:04:05.361 real 0m0.171s 00:04:05.361 user 0m0.045s 00:04:05.361 sys 0m0.026s 00:04:05.361 21:25:50 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.361 ************************************ 00:04:05.361 END TEST env_dpdk_post_init 00:04:05.361 21:25:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.361 ************************************ 00:04:05.619 21:25:50 env -- env/env.sh@26 -- # uname 00:04:05.619 21:25:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:05.619 21:25:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.619 21:25:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.619 21:25:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.619 21:25:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.619 ************************************ 00:04:05.619 START TEST env_mem_callbacks 00:04:05.619 ************************************ 00:04:05.619 21:25:50 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.619 EAL: Detected CPU lcores: 10 00:04:05.619 EAL: Detected NUMA nodes: 1 00:04:05.619 EAL: Detected shared linkage of DPDK 00:04:05.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.619 EAL: Selected IOVA mode 'PA' 00:04:05.619 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.619 00:04:05.619 00:04:05.619 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.619 http://cunit.sourceforge.net/ 00:04:05.619 00:04:05.619 00:04:05.619 Suite: memory 00:04:05.619 Test: test ... 00:04:05.619 register 0x200000200000 2097152 00:04:05.619 malloc 3145728 00:04:05.619 register 0x200000400000 4194304 00:04:05.619 buf 0x200000500000 len 3145728 PASSED 00:04:05.619 malloc 64 00:04:05.619 buf 0x2000004fff40 len 64 PASSED 00:04:05.619 malloc 4194304 00:04:05.619 register 0x200000800000 6291456 00:04:05.619 buf 0x200000a00000 len 4194304 PASSED 00:04:05.619 free 0x200000500000 3145728 00:04:05.619 free 0x2000004fff40 64 00:04:05.619 unregister 0x200000400000 4194304 PASSED 00:04:05.619 free 0x200000a00000 4194304 00:04:05.619 unregister 0x200000800000 6291456 PASSED 00:04:05.619 malloc 8388608 00:04:05.619 register 0x200000400000 10485760 00:04:05.619 buf 0x200000600000 len 8388608 PASSED 00:04:05.619 free 0x200000600000 8388608 00:04:05.619 unregister 0x200000400000 10485760 PASSED 00:04:05.619 passed 00:04:05.619 00:04:05.619 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.619 suites 1 1 n/a 0 0 00:04:05.619 tests 1 1 1 0 0 00:04:05.619 asserts 15 15 15 0 n/a 00:04:05.619 00:04:05.619 Elapsed time = 0.007 seconds 00:04:05.619 00:04:05.619 real 0m0.144s 00:04:05.619 user 0m0.020s 00:04:05.619 sys 0m0.024s 00:04:05.619 21:25:50 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.619 21:25:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.619 ************************************ 00:04:05.619 END TEST env_mem_callbacks 00:04:05.620 ************************************ 00:04:05.620 00:04:05.620 real 0m2.252s 00:04:05.620 user 0m1.125s 00:04:05.620 sys 0m0.770s 00:04:05.620 21:25:50 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.620 21:25:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.620 ************************************ 00:04:05.620 END TEST env 00:04:05.620 
************************************ 00:04:05.620 21:25:50 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.620 21:25:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.620 21:25:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.620 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:04:05.620 ************************************ 00:04:05.620 START TEST rpc 00:04:05.620 ************************************ 00:04:05.620 21:25:50 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.878 * Looking for test storage... 00:04:05.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.878 21:25:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58693 00:04:05.878 21:25:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.878 21:25:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.878 21:25:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58693 00:04:05.878 21:25:50 rpc -- common/autotest_common.sh@831 -- # '[' -z 58693 ']' 00:04:05.878 21:25:50 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.878 21:25:50 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.878 21:25:50 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.878 21:25:50 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.878 21:25:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.878 [2024-07-24 21:25:50.748402] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:05.878 [2024-07-24 21:25:50.748487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58693 ] 00:04:06.136 [2024-07-24 21:25:50.883524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.136 [2024-07-24 21:25:51.000899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.136 [2024-07-24 21:25:51.000947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58693' to capture a snapshot of events at runtime. 00:04:06.136 [2024-07-24 21:25:51.000963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.136 [2024-07-24 21:25:51.000972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.136 [2024-07-24 21:25:51.000979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58693 for offline analysis/debug. 
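rpc_cmd in the statements that follow is effectively a front end for scripts/rpc.py, which sends JSON-RPC requests to the spdk_tgt started above over its default /var/tmp/spdk.sock socket. The malloc/passthru round trip that rpc_integrity and rpc_daemon_integrity exercise below can be reproduced by hand while that target is still listening; a sketch, with the bdev name captured from the create call because it is auto-assigned:

# Same integrity round trip, driven directly through scripts/rpc.py
cd /home/vagrant/spdk_repo/spdk
malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)             # prints the new bdev name, e.g. Malloc0
./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0 # Passthru0 claims the malloc bdev
./scripts/rpc.py bdev_get_bdevs | jq length                     # expect 2: the malloc and its passthru
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete "$malloc"
./scripts/rpc.py bdev_get_bdevs | jq length                     # expect 0 again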
00:04:06.136 [2024-07-24 21:25:51.001012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.136 [2024-07-24 21:25:51.054167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:07.074 21:25:51 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:07.074 21:25:51 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:07.074 21:25:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.074 21:25:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.074 21:25:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.074 21:25:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.074 21:25:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.074 21:25:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.074 21:25:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.074 ************************************ 00:04:07.074 START TEST rpc_integrity 00:04:07.074 ************************************ 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.074 { 00:04:07.074 "name": "Malloc0", 00:04:07.074 "aliases": [ 00:04:07.074 "cf44e5e4-f70e-4b08-94ee-7baa614c9748" 00:04:07.074 ], 00:04:07.074 "product_name": "Malloc disk", 00:04:07.074 "block_size": 512, 00:04:07.074 "num_blocks": 16384, 00:04:07.074 "uuid": "cf44e5e4-f70e-4b08-94ee-7baa614c9748", 00:04:07.074 "assigned_rate_limits": { 00:04:07.074 "rw_ios_per_sec": 0, 00:04:07.074 "rw_mbytes_per_sec": 0, 00:04:07.074 "r_mbytes_per_sec": 0, 00:04:07.074 "w_mbytes_per_sec": 0 00:04:07.074 }, 00:04:07.074 "claimed": false, 00:04:07.074 "zoned": false, 00:04:07.074 
"supported_io_types": { 00:04:07.074 "read": true, 00:04:07.074 "write": true, 00:04:07.074 "unmap": true, 00:04:07.074 "flush": true, 00:04:07.074 "reset": true, 00:04:07.074 "nvme_admin": false, 00:04:07.074 "nvme_io": false, 00:04:07.074 "nvme_io_md": false, 00:04:07.074 "write_zeroes": true, 00:04:07.074 "zcopy": true, 00:04:07.074 "get_zone_info": false, 00:04:07.074 "zone_management": false, 00:04:07.074 "zone_append": false, 00:04:07.074 "compare": false, 00:04:07.074 "compare_and_write": false, 00:04:07.074 "abort": true, 00:04:07.074 "seek_hole": false, 00:04:07.074 "seek_data": false, 00:04:07.074 "copy": true, 00:04:07.074 "nvme_iov_md": false 00:04:07.074 }, 00:04:07.074 "memory_domains": [ 00:04:07.074 { 00:04:07.074 "dma_device_id": "system", 00:04:07.074 "dma_device_type": 1 00:04:07.074 }, 00:04:07.074 { 00:04:07.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.074 "dma_device_type": 2 00:04:07.074 } 00:04:07.074 ], 00:04:07.074 "driver_specific": {} 00:04:07.074 } 00:04:07.074 ]' 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.074 [2024-07-24 21:25:51.952653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.074 [2024-07-24 21:25:51.952736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.074 [2024-07-24 21:25:51.952762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x201fda0 00:04:07.074 [2024-07-24 21:25:51.952775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.074 [2024-07-24 21:25:51.954650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.074 [2024-07-24 21:25:51.954684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.074 Passthru0 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.074 21:25:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.074 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.074 { 00:04:07.074 "name": "Malloc0", 00:04:07.074 "aliases": [ 00:04:07.074 "cf44e5e4-f70e-4b08-94ee-7baa614c9748" 00:04:07.074 ], 00:04:07.074 "product_name": "Malloc disk", 00:04:07.074 "block_size": 512, 00:04:07.074 "num_blocks": 16384, 00:04:07.074 "uuid": "cf44e5e4-f70e-4b08-94ee-7baa614c9748", 00:04:07.074 "assigned_rate_limits": { 00:04:07.074 "rw_ios_per_sec": 0, 00:04:07.074 "rw_mbytes_per_sec": 0, 00:04:07.074 "r_mbytes_per_sec": 0, 00:04:07.074 "w_mbytes_per_sec": 0 00:04:07.074 }, 00:04:07.074 "claimed": true, 00:04:07.074 "claim_type": "exclusive_write", 00:04:07.074 "zoned": false, 00:04:07.074 "supported_io_types": { 00:04:07.074 "read": true, 00:04:07.074 "write": true, 00:04:07.074 "unmap": true, 00:04:07.074 "flush": true, 00:04:07.074 "reset": true, 00:04:07.074 "nvme_admin": false, 
00:04:07.074 "nvme_io": false, 00:04:07.074 "nvme_io_md": false, 00:04:07.074 "write_zeroes": true, 00:04:07.074 "zcopy": true, 00:04:07.074 "get_zone_info": false, 00:04:07.074 "zone_management": false, 00:04:07.074 "zone_append": false, 00:04:07.074 "compare": false, 00:04:07.074 "compare_and_write": false, 00:04:07.074 "abort": true, 00:04:07.074 "seek_hole": false, 00:04:07.074 "seek_data": false, 00:04:07.074 "copy": true, 00:04:07.074 "nvme_iov_md": false 00:04:07.074 }, 00:04:07.074 "memory_domains": [ 00:04:07.074 { 00:04:07.074 "dma_device_id": "system", 00:04:07.074 "dma_device_type": 1 00:04:07.074 }, 00:04:07.074 { 00:04:07.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.074 "dma_device_type": 2 00:04:07.074 } 00:04:07.074 ], 00:04:07.074 "driver_specific": {} 00:04:07.074 }, 00:04:07.074 { 00:04:07.074 "name": "Passthru0", 00:04:07.074 "aliases": [ 00:04:07.074 "e4bc7351-422e-5abd-b1f9-6a467c2e6cd8" 00:04:07.074 ], 00:04:07.074 "product_name": "passthru", 00:04:07.074 "block_size": 512, 00:04:07.074 "num_blocks": 16384, 00:04:07.074 "uuid": "e4bc7351-422e-5abd-b1f9-6a467c2e6cd8", 00:04:07.074 "assigned_rate_limits": { 00:04:07.074 "rw_ios_per_sec": 0, 00:04:07.074 "rw_mbytes_per_sec": 0, 00:04:07.074 "r_mbytes_per_sec": 0, 00:04:07.074 "w_mbytes_per_sec": 0 00:04:07.074 }, 00:04:07.074 "claimed": false, 00:04:07.074 "zoned": false, 00:04:07.074 "supported_io_types": { 00:04:07.074 "read": true, 00:04:07.074 "write": true, 00:04:07.074 "unmap": true, 00:04:07.074 "flush": true, 00:04:07.074 "reset": true, 00:04:07.074 "nvme_admin": false, 00:04:07.074 "nvme_io": false, 00:04:07.074 "nvme_io_md": false, 00:04:07.074 "write_zeroes": true, 00:04:07.074 "zcopy": true, 00:04:07.074 "get_zone_info": false, 00:04:07.074 "zone_management": false, 00:04:07.074 "zone_append": false, 00:04:07.074 "compare": false, 00:04:07.075 "compare_and_write": false, 00:04:07.075 "abort": true, 00:04:07.075 "seek_hole": false, 00:04:07.075 "seek_data": false, 00:04:07.075 "copy": true, 00:04:07.075 "nvme_iov_md": false 00:04:07.075 }, 00:04:07.075 "memory_domains": [ 00:04:07.075 { 00:04:07.075 "dma_device_id": "system", 00:04:07.075 "dma_device_type": 1 00:04:07.075 }, 00:04:07.075 { 00:04:07.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.075 "dma_device_type": 2 00:04:07.075 } 00:04:07.075 ], 00:04:07.075 "driver_specific": { 00:04:07.075 "passthru": { 00:04:07.075 "name": "Passthru0", 00:04:07.075 "base_bdev_name": "Malloc0" 00:04:07.075 } 00:04:07.075 } 00:04:07.075 } 00:04:07.075 ]' 00:04:07.075 21:25:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.075 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.075 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.075 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.075 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.075 21:25:52 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.075 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.075 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.075 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.333 21:25:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.333 00:04:07.333 real 0m0.322s 00:04:07.333 user 0m0.217s 00:04:07.333 sys 0m0.037s 00:04:07.333 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.333 21:25:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.333 ************************************ 00:04:07.333 END TEST rpc_integrity 00:04:07.333 ************************************ 00:04:07.334 21:25:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.334 21:25:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.334 21:25:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.334 21:25:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.334 ************************************ 00:04:07.334 START TEST rpc_plugins 00:04:07.334 ************************************ 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.334 { 00:04:07.334 "name": "Malloc1", 00:04:07.334 "aliases": [ 00:04:07.334 "7823bb05-d2c0-4c5b-a004-a696292ecc8f" 00:04:07.334 ], 00:04:07.334 "product_name": "Malloc disk", 00:04:07.334 "block_size": 4096, 00:04:07.334 "num_blocks": 256, 00:04:07.334 "uuid": "7823bb05-d2c0-4c5b-a004-a696292ecc8f", 00:04:07.334 "assigned_rate_limits": { 00:04:07.334 "rw_ios_per_sec": 0, 00:04:07.334 "rw_mbytes_per_sec": 0, 00:04:07.334 "r_mbytes_per_sec": 0, 00:04:07.334 "w_mbytes_per_sec": 0 00:04:07.334 }, 00:04:07.334 "claimed": false, 00:04:07.334 "zoned": false, 00:04:07.334 "supported_io_types": { 00:04:07.334 "read": true, 00:04:07.334 "write": true, 00:04:07.334 "unmap": true, 00:04:07.334 "flush": true, 00:04:07.334 "reset": true, 00:04:07.334 "nvme_admin": false, 00:04:07.334 "nvme_io": false, 00:04:07.334 "nvme_io_md": false, 00:04:07.334 "write_zeroes": true, 00:04:07.334 "zcopy": true, 00:04:07.334 "get_zone_info": false, 00:04:07.334 "zone_management": false, 00:04:07.334 "zone_append": false, 00:04:07.334 "compare": false, 00:04:07.334 "compare_and_write": false, 00:04:07.334 "abort": true, 00:04:07.334 "seek_hole": false, 00:04:07.334 "seek_data": false, 00:04:07.334 "copy": true, 00:04:07.334 "nvme_iov_md": false 00:04:07.334 }, 00:04:07.334 "memory_domains": [ 00:04:07.334 { 
00:04:07.334 "dma_device_id": "system", 00:04:07.334 "dma_device_type": 1 00:04:07.334 }, 00:04:07.334 { 00:04:07.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.334 "dma_device_type": 2 00:04:07.334 } 00:04:07.334 ], 00:04:07.334 "driver_specific": {} 00:04:07.334 } 00:04:07.334 ]' 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.334 21:25:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.334 00:04:07.334 real 0m0.162s 00:04:07.334 user 0m0.112s 00:04:07.334 sys 0m0.015s 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.334 21:25:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.334 ************************************ 00:04:07.334 END TEST rpc_plugins 00:04:07.334 ************************************ 00:04:07.593 21:25:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.593 21:25:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.593 21:25:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.593 21:25:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.593 ************************************ 00:04:07.593 START TEST rpc_trace_cmd_test 00:04:07.593 ************************************ 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.593 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58693", 00:04:07.593 "tpoint_group_mask": "0x8", 00:04:07.593 "iscsi_conn": { 00:04:07.593 "mask": "0x2", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "scsi": { 00:04:07.593 "mask": "0x4", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "bdev": { 00:04:07.593 "mask": "0x8", 00:04:07.593 "tpoint_mask": "0xffffffffffffffff" 00:04:07.593 }, 00:04:07.593 "nvmf_rdma": { 00:04:07.593 "mask": "0x10", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "nvmf_tcp": { 00:04:07.593 "mask": "0x20", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "ftl": { 00:04:07.593 
"mask": "0x40", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "blobfs": { 00:04:07.593 "mask": "0x80", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "dsa": { 00:04:07.593 "mask": "0x200", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "thread": { 00:04:07.593 "mask": "0x400", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "nvme_pcie": { 00:04:07.593 "mask": "0x800", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "iaa": { 00:04:07.593 "mask": "0x1000", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "nvme_tcp": { 00:04:07.593 "mask": "0x2000", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "bdev_nvme": { 00:04:07.593 "mask": "0x4000", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 }, 00:04:07.593 "sock": { 00:04:07.593 "mask": "0x8000", 00:04:07.593 "tpoint_mask": "0x0" 00:04:07.593 } 00:04:07.593 }' 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.593 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.852 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.852 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.852 21:25:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.852 00:04:07.852 real 0m0.273s 00:04:07.852 user 0m0.235s 00:04:07.852 sys 0m0.031s 00:04:07.852 21:25:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.852 21:25:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.852 ************************************ 00:04:07.852 END TEST rpc_trace_cmd_test 00:04:07.852 ************************************ 00:04:07.852 21:25:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.852 21:25:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.852 21:25:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.852 21:25:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.852 21:25:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.852 21:25:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.852 ************************************ 00:04:07.852 START TEST rpc_daemon_integrity 00:04:07.852 ************************************ 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.852 { 00:04:07.852 "name": "Malloc2", 00:04:07.852 "aliases": [ 00:04:07.852 "f22420e6-1322-4e63-8582-269b8b906801" 00:04:07.852 ], 00:04:07.852 "product_name": "Malloc disk", 00:04:07.852 "block_size": 512, 00:04:07.852 "num_blocks": 16384, 00:04:07.852 "uuid": "f22420e6-1322-4e63-8582-269b8b906801", 00:04:07.852 "assigned_rate_limits": { 00:04:07.852 "rw_ios_per_sec": 0, 00:04:07.852 "rw_mbytes_per_sec": 0, 00:04:07.852 "r_mbytes_per_sec": 0, 00:04:07.852 "w_mbytes_per_sec": 0 00:04:07.852 }, 00:04:07.852 "claimed": false, 00:04:07.852 "zoned": false, 00:04:07.852 "supported_io_types": { 00:04:07.852 "read": true, 00:04:07.852 "write": true, 00:04:07.852 "unmap": true, 00:04:07.852 "flush": true, 00:04:07.852 "reset": true, 00:04:07.852 "nvme_admin": false, 00:04:07.852 "nvme_io": false, 00:04:07.852 "nvme_io_md": false, 00:04:07.852 "write_zeroes": true, 00:04:07.852 "zcopy": true, 00:04:07.852 "get_zone_info": false, 00:04:07.852 "zone_management": false, 00:04:07.852 "zone_append": false, 00:04:07.852 "compare": false, 00:04:07.852 "compare_and_write": false, 00:04:07.852 "abort": true, 00:04:07.852 "seek_hole": false, 00:04:07.852 "seek_data": false, 00:04:07.852 "copy": true, 00:04:07.852 "nvme_iov_md": false 00:04:07.852 }, 00:04:07.852 "memory_domains": [ 00:04:07.852 { 00:04:07.852 "dma_device_id": "system", 00:04:07.852 "dma_device_type": 1 00:04:07.852 }, 00:04:07.852 { 00:04:07.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.852 "dma_device_type": 2 00:04:07.852 } 00:04:07.852 ], 00:04:07.852 "driver_specific": {} 00:04:07.852 } 00:04:07.852 ]' 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.852 [2024-07-24 21:25:52.842562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.852 [2024-07-24 21:25:52.842654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.852 [2024-07-24 21:25:52.842683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2084be0 00:04:07.852 [2024-07-24 21:25:52.842695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.852 [2024-07-24 21:25:52.844145] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.852 [2024-07-24 21:25:52.844181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.852 Passthru0 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.852 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.150 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.150 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.150 { 00:04:08.150 "name": "Malloc2", 00:04:08.150 "aliases": [ 00:04:08.150 "f22420e6-1322-4e63-8582-269b8b906801" 00:04:08.150 ], 00:04:08.150 "product_name": "Malloc disk", 00:04:08.150 "block_size": 512, 00:04:08.150 "num_blocks": 16384, 00:04:08.150 "uuid": "f22420e6-1322-4e63-8582-269b8b906801", 00:04:08.150 "assigned_rate_limits": { 00:04:08.150 "rw_ios_per_sec": 0, 00:04:08.150 "rw_mbytes_per_sec": 0, 00:04:08.150 "r_mbytes_per_sec": 0, 00:04:08.150 "w_mbytes_per_sec": 0 00:04:08.150 }, 00:04:08.150 "claimed": true, 00:04:08.150 "claim_type": "exclusive_write", 00:04:08.150 "zoned": false, 00:04:08.150 "supported_io_types": { 00:04:08.150 "read": true, 00:04:08.150 "write": true, 00:04:08.150 "unmap": true, 00:04:08.150 "flush": true, 00:04:08.150 "reset": true, 00:04:08.150 "nvme_admin": false, 00:04:08.150 "nvme_io": false, 00:04:08.150 "nvme_io_md": false, 00:04:08.150 "write_zeroes": true, 00:04:08.150 "zcopy": true, 00:04:08.150 "get_zone_info": false, 00:04:08.150 "zone_management": false, 00:04:08.150 "zone_append": false, 00:04:08.150 "compare": false, 00:04:08.150 "compare_and_write": false, 00:04:08.150 "abort": true, 00:04:08.150 "seek_hole": false, 00:04:08.150 "seek_data": false, 00:04:08.150 "copy": true, 00:04:08.150 "nvme_iov_md": false 00:04:08.150 }, 00:04:08.150 "memory_domains": [ 00:04:08.150 { 00:04:08.150 "dma_device_id": "system", 00:04:08.150 "dma_device_type": 1 00:04:08.150 }, 00:04:08.150 { 00:04:08.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.150 "dma_device_type": 2 00:04:08.150 } 00:04:08.150 ], 00:04:08.150 "driver_specific": {} 00:04:08.150 }, 00:04:08.150 { 00:04:08.150 "name": "Passthru0", 00:04:08.150 "aliases": [ 00:04:08.150 "595daff4-d1c3-5857-a2b0-47a8f56dd7c9" 00:04:08.150 ], 00:04:08.150 "product_name": "passthru", 00:04:08.150 "block_size": 512, 00:04:08.150 "num_blocks": 16384, 00:04:08.150 "uuid": "595daff4-d1c3-5857-a2b0-47a8f56dd7c9", 00:04:08.150 "assigned_rate_limits": { 00:04:08.150 "rw_ios_per_sec": 0, 00:04:08.150 "rw_mbytes_per_sec": 0, 00:04:08.150 "r_mbytes_per_sec": 0, 00:04:08.150 "w_mbytes_per_sec": 0 00:04:08.150 }, 00:04:08.150 "claimed": false, 00:04:08.150 "zoned": false, 00:04:08.150 "supported_io_types": { 00:04:08.150 "read": true, 00:04:08.150 "write": true, 00:04:08.150 "unmap": true, 00:04:08.150 "flush": true, 00:04:08.150 "reset": true, 00:04:08.150 "nvme_admin": false, 00:04:08.150 "nvme_io": false, 00:04:08.150 "nvme_io_md": false, 00:04:08.150 "write_zeroes": true, 00:04:08.150 "zcopy": true, 00:04:08.150 "get_zone_info": false, 00:04:08.150 "zone_management": false, 00:04:08.150 "zone_append": false, 00:04:08.150 "compare": false, 00:04:08.150 "compare_and_write": false, 00:04:08.150 "abort": true, 00:04:08.150 "seek_hole": false, 
00:04:08.150 "seek_data": false, 00:04:08.150 "copy": true, 00:04:08.150 "nvme_iov_md": false 00:04:08.150 }, 00:04:08.150 "memory_domains": [ 00:04:08.150 { 00:04:08.150 "dma_device_id": "system", 00:04:08.150 "dma_device_type": 1 00:04:08.150 }, 00:04:08.150 { 00:04:08.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.150 "dma_device_type": 2 00:04:08.150 } 00:04:08.150 ], 00:04:08.150 "driver_specific": { 00:04:08.150 "passthru": { 00:04:08.150 "name": "Passthru0", 00:04:08.150 "base_bdev_name": "Malloc2" 00:04:08.150 } 00:04:08.150 } 00:04:08.150 } 00:04:08.150 ]' 00:04:08.150 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.150 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.151 21:25:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.151 21:25:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.151 00:04:08.151 real 0m0.324s 00:04:08.151 user 0m0.218s 00:04:08.151 sys 0m0.040s 00:04:08.151 21:25:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.151 21:25:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.151 ************************************ 00:04:08.151 END TEST rpc_daemon_integrity 00:04:08.151 ************************************ 00:04:08.151 21:25:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.151 21:25:53 rpc -- rpc/rpc.sh@84 -- # killprocess 58693 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 58693 ']' 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@954 -- # kill -0 58693 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@955 -- # uname 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58693 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.151 killing process with pid 58693 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58693' 00:04:08.151 21:25:53 rpc -- common/autotest_common.sh@969 -- # kill 58693 00:04:08.151 21:25:53 
rpc -- common/autotest_common.sh@974 -- # wait 58693 00:04:08.729 00:04:08.729 real 0m3.000s 00:04:08.729 user 0m3.866s 00:04:08.729 sys 0m0.692s 00:04:08.729 21:25:53 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.729 21:25:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.729 ************************************ 00:04:08.729 END TEST rpc 00:04:08.729 ************************************ 00:04:08.729 21:25:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.729 21:25:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.729 21:25:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.729 21:25:53 -- common/autotest_common.sh@10 -- # set +x 00:04:08.729 ************************************ 00:04:08.729 START TEST skip_rpc 00:04:08.729 ************************************ 00:04:08.729 21:25:53 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.988 * Looking for test storage... 00:04:08.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.988 21:25:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.988 21:25:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:08.988 21:25:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.988 21:25:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.988 21:25:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.988 21:25:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.988 ************************************ 00:04:08.988 START TEST skip_rpc 00:04:08.988 ************************************ 00:04:08.988 21:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:08.988 21:25:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58891 00:04:08.988 21:25:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.988 21:25:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.988 21:25:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.988 [2024-07-24 21:25:53.830388] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
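The first skip_rpc case starts the target with --no-rpc-server, sleeps, and then expects a plain spdk_get_version call to fail because no listener is ever created on /var/tmp/spdk.sock. A rough stand-alone version of that probe, assuming the same build tree; the backgrounding and cleanup are additions for illustration:

# With --no-rpc-server the target never opens its RPC socket, so any call must fail
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                   # the test also sleeps before probing (skip_rpc.sh@19)
if ./scripts/rpc.py spdk_get_version; then
    echo "RPC unexpectedly answered" >&2
fi
kill "$tgt_pid"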
00:04:08.988 [2024-07-24 21:25:53.830490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:04:08.988 [2024-07-24 21:25:53.971312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.247 [2024-07-24 21:25:54.124805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.247 [2024-07-24 21:25:54.197660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58891 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58891 ']' 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58891 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58891 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58891' 00:04:14.519 killing process with pid 58891 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58891 00:04:14.519 21:25:58 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58891 00:04:14.519 00:04:14.519 ************************************ 00:04:14.519 END TEST skip_rpc 00:04:14.519 ************************************ 00:04:14.519 real 0m5.571s 00:04:14.519 user 0m5.114s 00:04:14.519 sys 0m0.362s 00:04:14.519 21:25:59 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.519 21:25:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.519 21:25:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.519 21:25:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.519 21:25:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.519 21:25:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.519 ************************************ 00:04:14.519 START TEST skip_rpc_with_json 00:04:14.519 ************************************ 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58983 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58983 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58983 ']' 00:04:14.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.519 21:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.519 [2024-07-24 21:25:59.458806] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:04:14.519 [2024-07-24 21:25:59.458935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58983 ] 00:04:14.778 [2024-07-24 21:25:59.600975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.778 [2024-07-24 21:25:59.714448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.037 [2024-07-24 21:25:59.784771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.604 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.604 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:15.604 21:26:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.604 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.605 [2024-07-24 21:26:00.446900] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.605 request: 00:04:15.605 { 00:04:15.605 "trtype": "tcp", 00:04:15.605 "method": "nvmf_get_transports", 00:04:15.605 "req_id": 1 00:04:15.605 } 00:04:15.605 Got JSON-RPC error response 00:04:15.605 response: 00:04:15.605 { 00:04:15.605 "code": -19, 00:04:15.605 "message": "No such device" 00:04:15.605 } 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.605 [2024-07-24 21:26:00.459007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.605 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.864 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.864 21:26:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.864 { 00:04:15.864 "subsystems": [ 00:04:15.864 { 00:04:15.864 "subsystem": "keyring", 00:04:15.864 "config": [] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "iobuf", 00:04:15.864 "config": [ 00:04:15.864 { 00:04:15.864 "method": "iobuf_set_options", 00:04:15.864 "params": { 00:04:15.864 "small_pool_count": 8192, 00:04:15.864 "large_pool_count": 1024, 00:04:15.864 "small_bufsize": 8192, 00:04:15.864 "large_bufsize": 135168 00:04:15.864 } 00:04:15.864 } 00:04:15.864 ] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "sock", 00:04:15.864 "config": [ 00:04:15.864 { 00:04:15.864 "method": "sock_set_default_impl", 00:04:15.864 "params": { 00:04:15.864 "impl_name": "uring" 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "sock_impl_set_options", 
00:04:15.864 "params": { 00:04:15.864 "impl_name": "ssl", 00:04:15.864 "recv_buf_size": 4096, 00:04:15.864 "send_buf_size": 4096, 00:04:15.864 "enable_recv_pipe": true, 00:04:15.864 "enable_quickack": false, 00:04:15.864 "enable_placement_id": 0, 00:04:15.864 "enable_zerocopy_send_server": true, 00:04:15.864 "enable_zerocopy_send_client": false, 00:04:15.864 "zerocopy_threshold": 0, 00:04:15.864 "tls_version": 0, 00:04:15.864 "enable_ktls": false 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "sock_impl_set_options", 00:04:15.864 "params": { 00:04:15.864 "impl_name": "posix", 00:04:15.864 "recv_buf_size": 2097152, 00:04:15.864 "send_buf_size": 2097152, 00:04:15.864 "enable_recv_pipe": true, 00:04:15.864 "enable_quickack": false, 00:04:15.864 "enable_placement_id": 0, 00:04:15.864 "enable_zerocopy_send_server": true, 00:04:15.864 "enable_zerocopy_send_client": false, 00:04:15.864 "zerocopy_threshold": 0, 00:04:15.864 "tls_version": 0, 00:04:15.864 "enable_ktls": false 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "sock_impl_set_options", 00:04:15.864 "params": { 00:04:15.864 "impl_name": "uring", 00:04:15.864 "recv_buf_size": 2097152, 00:04:15.864 "send_buf_size": 2097152, 00:04:15.864 "enable_recv_pipe": true, 00:04:15.864 "enable_quickack": false, 00:04:15.864 "enable_placement_id": 0, 00:04:15.864 "enable_zerocopy_send_server": false, 00:04:15.864 "enable_zerocopy_send_client": false, 00:04:15.864 "zerocopy_threshold": 0, 00:04:15.864 "tls_version": 0, 00:04:15.864 "enable_ktls": false 00:04:15.864 } 00:04:15.864 } 00:04:15.864 ] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "vmd", 00:04:15.864 "config": [] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "accel", 00:04:15.864 "config": [ 00:04:15.864 { 00:04:15.864 "method": "accel_set_options", 00:04:15.864 "params": { 00:04:15.864 "small_cache_size": 128, 00:04:15.864 "large_cache_size": 16, 00:04:15.864 "task_count": 2048, 00:04:15.864 "sequence_count": 2048, 00:04:15.864 "buf_count": 2048 00:04:15.864 } 00:04:15.864 } 00:04:15.864 ] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "bdev", 00:04:15.864 "config": [ 00:04:15.864 { 00:04:15.864 "method": "bdev_set_options", 00:04:15.864 "params": { 00:04:15.864 "bdev_io_pool_size": 65535, 00:04:15.864 "bdev_io_cache_size": 256, 00:04:15.864 "bdev_auto_examine": true, 00:04:15.864 "iobuf_small_cache_size": 128, 00:04:15.864 "iobuf_large_cache_size": 16 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "bdev_raid_set_options", 00:04:15.864 "params": { 00:04:15.864 "process_window_size_kb": 1024, 00:04:15.864 "process_max_bandwidth_mb_sec": 0 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "bdev_iscsi_set_options", 00:04:15.864 "params": { 00:04:15.864 "timeout_sec": 30 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "bdev_nvme_set_options", 00:04:15.864 "params": { 00:04:15.864 "action_on_timeout": "none", 00:04:15.864 "timeout_us": 0, 00:04:15.864 "timeout_admin_us": 0, 00:04:15.864 "keep_alive_timeout_ms": 10000, 00:04:15.864 "arbitration_burst": 0, 00:04:15.864 "low_priority_weight": 0, 00:04:15.864 "medium_priority_weight": 0, 00:04:15.864 "high_priority_weight": 0, 00:04:15.864 "nvme_adminq_poll_period_us": 10000, 00:04:15.864 "nvme_ioq_poll_period_us": 0, 00:04:15.864 "io_queue_requests": 0, 00:04:15.864 "delay_cmd_submit": true, 00:04:15.864 "transport_retry_count": 4, 00:04:15.864 "bdev_retry_count": 3, 00:04:15.864 "transport_ack_timeout": 0, 
00:04:15.864 "ctrlr_loss_timeout_sec": 0, 00:04:15.864 "reconnect_delay_sec": 0, 00:04:15.864 "fast_io_fail_timeout_sec": 0, 00:04:15.864 "disable_auto_failback": false, 00:04:15.864 "generate_uuids": false, 00:04:15.864 "transport_tos": 0, 00:04:15.864 "nvme_error_stat": false, 00:04:15.864 "rdma_srq_size": 0, 00:04:15.864 "io_path_stat": false, 00:04:15.864 "allow_accel_sequence": false, 00:04:15.864 "rdma_max_cq_size": 0, 00:04:15.864 "rdma_cm_event_timeout_ms": 0, 00:04:15.864 "dhchap_digests": [ 00:04:15.864 "sha256", 00:04:15.864 "sha384", 00:04:15.864 "sha512" 00:04:15.864 ], 00:04:15.864 "dhchap_dhgroups": [ 00:04:15.864 "null", 00:04:15.864 "ffdhe2048", 00:04:15.864 "ffdhe3072", 00:04:15.864 "ffdhe4096", 00:04:15.864 "ffdhe6144", 00:04:15.864 "ffdhe8192" 00:04:15.864 ] 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "bdev_nvme_set_hotplug", 00:04:15.864 "params": { 00:04:15.864 "period_us": 100000, 00:04:15.864 "enable": false 00:04:15.864 } 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "method": "bdev_wait_for_examine" 00:04:15.864 } 00:04:15.864 ] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "scsi", 00:04:15.864 "config": null 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "scheduler", 00:04:15.864 "config": [ 00:04:15.864 { 00:04:15.864 "method": "framework_set_scheduler", 00:04:15.864 "params": { 00:04:15.864 "name": "static" 00:04:15.864 } 00:04:15.864 } 00:04:15.864 ] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "vhost_scsi", 00:04:15.864 "config": [] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "vhost_blk", 00:04:15.864 "config": [] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "ublk", 00:04:15.864 "config": [] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "nbd", 00:04:15.864 "config": [] 00:04:15.864 }, 00:04:15.864 { 00:04:15.864 "subsystem": "nvmf", 00:04:15.865 "config": [ 00:04:15.865 { 00:04:15.865 "method": "nvmf_set_config", 00:04:15.865 "params": { 00:04:15.865 "discovery_filter": "match_any", 00:04:15.865 "admin_cmd_passthru": { 00:04:15.865 "identify_ctrlr": false 00:04:15.865 } 00:04:15.865 } 00:04:15.865 }, 00:04:15.865 { 00:04:15.865 "method": "nvmf_set_max_subsystems", 00:04:15.865 "params": { 00:04:15.865 "max_subsystems": 1024 00:04:15.865 } 00:04:15.865 }, 00:04:15.865 { 00:04:15.865 "method": "nvmf_set_crdt", 00:04:15.865 "params": { 00:04:15.865 "crdt1": 0, 00:04:15.865 "crdt2": 0, 00:04:15.865 "crdt3": 0 00:04:15.865 } 00:04:15.865 }, 00:04:15.865 { 00:04:15.865 "method": "nvmf_create_transport", 00:04:15.865 "params": { 00:04:15.865 "trtype": "TCP", 00:04:15.865 "max_queue_depth": 128, 00:04:15.865 "max_io_qpairs_per_ctrlr": 127, 00:04:15.865 "in_capsule_data_size": 4096, 00:04:15.865 "max_io_size": 131072, 00:04:15.865 "io_unit_size": 131072, 00:04:15.865 "max_aq_depth": 128, 00:04:15.865 "num_shared_buffers": 511, 00:04:15.865 "buf_cache_size": 4294967295, 00:04:15.865 "dif_insert_or_strip": false, 00:04:15.865 "zcopy": false, 00:04:15.865 "c2h_success": true, 00:04:15.865 "sock_priority": 0, 00:04:15.865 "abort_timeout_sec": 1, 00:04:15.865 "ack_timeout": 0, 00:04:15.865 "data_wr_pool_size": 0 00:04:15.865 } 00:04:15.865 } 00:04:15.865 ] 00:04:15.865 }, 00:04:15.865 { 00:04:15.865 "subsystem": "iscsi", 00:04:15.865 "config": [ 00:04:15.865 { 00:04:15.865 "method": "iscsi_set_options", 00:04:15.865 "params": { 00:04:15.865 "node_base": "iqn.2016-06.io.spdk", 00:04:15.865 "max_sessions": 128, 00:04:15.865 "max_connections_per_session": 2, 00:04:15.865 
"max_queue_depth": 64, 00:04:15.865 "default_time2wait": 2, 00:04:15.865 "default_time2retain": 20, 00:04:15.865 "first_burst_length": 8192, 00:04:15.865 "immediate_data": true, 00:04:15.865 "allow_duplicated_isid": false, 00:04:15.865 "error_recovery_level": 0, 00:04:15.865 "nop_timeout": 60, 00:04:15.865 "nop_in_interval": 30, 00:04:15.865 "disable_chap": false, 00:04:15.865 "require_chap": false, 00:04:15.865 "mutual_chap": false, 00:04:15.865 "chap_group": 0, 00:04:15.865 "max_large_datain_per_connection": 64, 00:04:15.865 "max_r2t_per_connection": 4, 00:04:15.865 "pdu_pool_size": 36864, 00:04:15.865 "immediate_data_pool_size": 16384, 00:04:15.865 "data_out_pool_size": 2048 00:04:15.865 } 00:04:15.865 } 00:04:15.865 ] 00:04:15.865 } 00:04:15.865 ] 00:04:15.865 } 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58983 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58983 ']' 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58983 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58983 00:04:15.865 killing process with pid 58983 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58983' 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58983 00:04:15.865 21:26:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58983 00:04:16.432 21:26:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.432 21:26:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59010 00:04:16.432 21:26:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59010 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59010 ']' 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59010 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59010 00:04:21.699 killing process with pid 59010 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.699 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.700 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59010' 00:04:21.700 21:26:06 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59010 00:04:21.700 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59010 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.959 ************************************ 00:04:21.959 END TEST skip_rpc_with_json 00:04:21.959 ************************************ 00:04:21.959 00:04:21.959 real 0m7.332s 00:04:21.959 user 0m6.924s 00:04:21.959 sys 0m0.805s 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.959 21:26:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.959 21:26:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.959 21:26:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.959 21:26:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.959 ************************************ 00:04:21.959 START TEST skip_rpc_with_delay 00:04:21.959 ************************************ 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.959 [2024-07-24 21:26:06.824167] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
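The skip_rpc_with_delay block above is a negative test: spdk_tgt is launched through the NOT wrapper with --no-rpc-server and --wait-for-rpc together, and the case passes only because the binary rejects that combination and exits non-zero (es=1 in the trace). A minimal sketch of that wrapper pattern, simplified from what the autotest_common.sh trace shows (the real helper also classifies signal exits, which is what the (( es > 128 )) check above is for):

  # Simplified expected-failure wrapper; not the full autotest_common.sh implementation.
  NOT() {
      local es=0
      "$@" || es=$?        # capture the exit status without tripping set -e
      (( es != 0 ))        # succeed only when the wrapped command failed
  }

  # Expected to fail: --wait-for-rpc is rejected when no RPC server will be started.
  NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc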
00:04:21.959 [2024-07-24 21:26:06.824308] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.959 00:04:21.959 real 0m0.087s 00:04:21.959 user 0m0.052s 00:04:21.959 sys 0m0.033s 00:04:21.959 ************************************ 00:04:21.959 END TEST skip_rpc_with_delay 00:04:21.959 ************************************ 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.959 21:26:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:21.959 21:26:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.959 21:26:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.959 21:26:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.959 21:26:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.959 21:26:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.959 21:26:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.959 ************************************ 00:04:21.959 START TEST exit_on_failed_rpc_init 00:04:21.959 ************************************ 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59120 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59120 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59120 ']' 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.959 21:26:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.218 [2024-07-24 21:26:06.964390] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
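After exit_on_failed_rpc_init starts its first target (pid 59120), waitforlisten blocks with max_retries=100 until the app answers on /var/tmp/spdk.sock. The helper's internals are not traced here, so the following is only a rough sketch of the polling idea under a hypothetical name, not the real waitforlisten:

  # Hypothetical simplified poller; the real waitforlisten lives in autotest_common.sh.
  wait_for_rpc_socket() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                        # target died while starting
          if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                                                  # RPC server is answering
          fi
          sleep 0.1
      done
      return 1
  }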
00:04:22.218 [2024-07-24 21:26:06.964988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59120 ] 00:04:22.218 [2024-07-24 21:26:07.100527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.218 [2024-07-24 21:26:07.194073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.477 [2024-07-24 21:26:07.264036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.043 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.044 21:26:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.301 [2024-07-24 21:26:08.041004] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:23.301 [2024-07-24 21:26:08.041098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:04:23.301 [2024-07-24 21:26:08.171745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.560 [2024-07-24 21:26:08.325890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.560 [2024-07-24 21:26:08.326395] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
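The error above is the point of the test: the second spdk_tgt (core mask 0x2) is started without -r, tries to bind the default /var/tmp/spdk.sock that pid 59120 already owns, rpc_listen fails, and spdk_app_start stops with a non-zero code that the NOT wrapper turns into a pass. Outside of this negative test, two targets can coexist by giving each its own RPC socket, for example:

  # Illustrative only: run two targets side by side on separate RPC sockets.
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

  # Address each instance explicitly through rpc.py.
  scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
  scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version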
00:04:23.560 [2024-07-24 21:26:08.326736] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.560 [2024-07-24 21:26:08.326988] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59120 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59120 ']' 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59120 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59120 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.560 killing process with pid 59120 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59120' 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59120 00:04:23.560 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59120 00:04:24.126 ************************************ 00:04:24.126 END TEST exit_on_failed_rpc_init 00:04:24.126 ************************************ 00:04:24.126 00:04:24.126 real 0m2.080s 00:04:24.126 user 0m2.415s 00:04:24.126 sys 0m0.487s 00:04:24.126 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.126 21:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.126 21:26:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.126 ************************************ 00:04:24.126 END TEST skip_rpc 00:04:24.126 ************************************ 00:04:24.126 00:04:24.126 real 0m15.349s 00:04:24.126 user 0m14.604s 00:04:24.126 sys 0m1.851s 00:04:24.126 21:26:09 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.126 21:26:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.126 21:26:09 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.126 21:26:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.126 21:26:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.126 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:04:24.126 
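killprocess, traced above for pid 59120 and again later for the json_config targets, always runs the same sequence: confirm the pid is set and still alive, resolve its command name (reactor_0 for an SPDK app), announce the kill, signal, then wait so the exit status and shutdown logs get collected. A condensed sketch of that sequence, abbreviated from the trace rather than copied from autotest_common.sh:

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0                 # nothing left to kill
      process_name=$(ps --no-headers -o comm= "$pid")        # reactor_0 for an SPDK target
      echo "killing process with pid $pid ($process_name)"
      kill "$pid"
      wait "$pid" 2>/dev/null                                # assumes the pid is a child of this shell
  }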
************************************ 00:04:24.126 START TEST rpc_client 00:04:24.126 ************************************ 00:04:24.126 21:26:09 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.385 * Looking for test storage... 00:04:24.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:24.385 21:26:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:24.385 OK 00:04:24.385 21:26:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.385 ************************************ 00:04:24.385 END TEST rpc_client 00:04:24.385 ************************************ 00:04:24.385 00:04:24.385 real 0m0.094s 00:04:24.385 user 0m0.048s 00:04:24.385 sys 0m0.052s 00:04:24.385 21:26:09 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.385 21:26:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.385 21:26:09 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.385 21:26:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.385 21:26:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.385 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:04:24.385 ************************************ 00:04:24.385 START TEST json_config 00:04:24.385 ************************************ 00:04:24.385 21:26:09 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.385 21:26:09 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.385 21:26:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.385 21:26:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.385 21:26:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.385 21:26:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.385 21:26:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.386 21:26:09 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.386 21:26:09 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.386 21:26:09 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.386 21:26:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.386 21:26:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.386 21:26:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.386 21:26:09 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.386 21:26:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@47 -- # : 0 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.386 21:26:09 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.386 21:26:09 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:24.386 INFO: JSON configuration test init 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.386 21:26:09 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.386 21:26:09 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.386 21:26:09 json_config -- json_config/common.sh@10 -- # shift 00:04:24.386 21:26:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.386 21:26:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.386 21:26:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.386 21:26:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.386 21:26:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.386 21:26:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59256 00:04:24.386 21:26:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.386 Waiting for target to run... 
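json_config/common.sh, sourced at the top of the test, keys everything on the app name: pid, RPC socket, default CPU/memory parameters and the JSON file the config is dumped to are all associative arrays indexed by 'target' or 'initiator'. A reduced sketch of that bookkeeping (target only, helper body abbreviated, binary choice assumed):

  declare -A app_pid=([target]='' [initiator]='')
  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')

  json_config_test_start_app() {
      local app=$1; shift
      # word splitting of the per-app parameter string is intentional
      ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
      app_pid[$app]=$!
      # ...then block until the RPC socket answers (waitforlisten in the real script)
  }

  json_config_test_start_app target --wait-for-rpc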
00:04:24.386 21:26:09 json_config -- json_config/common.sh@25 -- # waitforlisten 59256 /var/tmp/spdk_tgt.sock 00:04:24.386 21:26:09 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@831 -- # '[' -z 59256 ']' 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.386 21:26:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.386 [2024-07-24 21:26:09.359230] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:24.386 [2024-07-24 21:26:09.359337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:04:24.953 [2024-07-24 21:26:09.778289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.953 [2024-07-24 21:26:09.883836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.518 00:04:25.518 21:26:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.518 21:26:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:25.518 21:26:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.518 21:26:10 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:25.518 21:26:10 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:25.518 21:26:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.518 21:26:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.518 21:26:10 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:25.518 21:26:10 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:25.518 21:26:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.518 21:26:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.518 21:26:10 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.518 21:26:10 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:25.518 21:26:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:25.777 [2024-07-24 21:26:10.559656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.777 21:26:10 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:25.777 21:26:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:25.777 21:26:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.777 21:26:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.777 21:26:10 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:25.777 21:26:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:25.777 21:26:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:25.777 21:26:10 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:25.777 21:26:10 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:25.777 21:26:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@51 -- # sort 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:26.343 21:26:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.343 21:26:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:26.343 21:26:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.343 21:26:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:26.343 21:26:11 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.343 21:26:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.601 MallocForNvmf0 00:04:26.601 21:26:11 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.601 21:26:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.858 MallocForNvmf1 00:04:26.858 21:26:11 json_config -- 
json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.858 21:26:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.858 [2024-07-24 21:26:11.839903] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.116 21:26:11 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.116 21:26:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.116 21:26:12 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.116 21:26:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.375 21:26:12 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.375 21:26:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.633 21:26:12 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.633 21:26:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.891 [2024-07-24 21:26:12.820405] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.891 21:26:12 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:27.891 21:26:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.891 21:26:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.891 21:26:12 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:27.891 21:26:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.891 21:26:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.149 21:26:12 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:28.149 21:26:12 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.149 21:26:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.414 MallocBdevForConfigChangeCheck 00:04:28.414 21:26:13 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:28.414 21:26:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.414 21:26:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.414 21:26:13 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:28.414 21:26:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.685 INFO: shutting down applications... 
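Collected from the tgt_rpc calls above, the NVMe-oF part of the generated config comes down to a short rpc.py sequence against the target socket; restated here in plain form with the same arguments as the trace (paths shortened):

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0       # 8 MB bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024 B blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json                    # dump the result for later comparison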
00:04:28.685 21:26:13 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:28.685 21:26:13 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:28.685 21:26:13 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:28.685 21:26:13 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:28.685 21:26:13 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:28.942 Calling clear_iscsi_subsystem 00:04:28.942 Calling clear_nvmf_subsystem 00:04:28.942 Calling clear_nbd_subsystem 00:04:28.942 Calling clear_ublk_subsystem 00:04:28.942 Calling clear_vhost_blk_subsystem 00:04:28.942 Calling clear_vhost_scsi_subsystem 00:04:28.942 Calling clear_bdev_subsystem 00:04:28.942 21:26:13 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:28.942 21:26:13 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:28.942 21:26:13 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:28.942 21:26:13 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.942 21:26:13 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:28.942 21:26:13 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.507 21:26:14 json_config -- json_config/json_config.sh@349 -- # break 00:04:29.507 21:26:14 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:29.507 21:26:14 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:29.507 21:26:14 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.507 21:26:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.507 21:26:14 json_config -- json_config/common.sh@35 -- # [[ -n 59256 ]] 00:04:29.507 21:26:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59256 00:04:29.507 21:26:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.507 21:26:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.507 21:26:14 json_config -- json_config/common.sh@41 -- # kill -0 59256 00:04:29.507 21:26:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.072 21:26:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.072 21:26:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.072 21:26:14 json_config -- json_config/common.sh@41 -- # kill -0 59256 00:04:30.072 21:26:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.072 21:26:14 json_config -- json_config/common.sh@43 -- # break 00:04:30.072 21:26:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.072 SPDK target shutdown done 00:04:30.072 21:26:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.072 INFO: relaunching applications... 00:04:30.072 21:26:14 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
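The teardown above is the usual SIGINT-plus-poll pattern: one SIGINT to the target, then probe the pid for up to 30 x 0.5 s until kill -0 fails and 'SPDK target shutdown done' is printed. A compact sketch of that loop under a hypothetical name (the hard-kill fallback at the end is illustrative, not something the trace shows):

  shutdown_app() {
      local pid=$1 i
      kill -SIGINT "$pid" 2>/dev/null || return 0
      for ((i = 0; i < 30; i++)); do                         # same 30-iteration budget as the trace
          kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
          sleep 0.5
      done
      echo "pid $pid did not exit in time" >&2
      kill -9 "$pid" 2>/dev/null                             # illustrative fallback only
  }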
00:04:30.072 21:26:14 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.072 21:26:14 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.072 21:26:14 json_config -- json_config/common.sh@10 -- # shift 00:04:30.072 21:26:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.073 21:26:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.073 21:26:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.073 21:26:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.073 21:26:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.073 21:26:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59457 00:04:30.073 21:26:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.073 Waiting for target to run... 00:04:30.073 21:26:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.073 21:26:14 json_config -- json_config/common.sh@25 -- # waitforlisten 59457 /var/tmp/spdk_tgt.sock 00:04:30.073 21:26:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 59457 ']' 00:04:30.073 21:26:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.073 21:26:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.073 21:26:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.073 21:26:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.073 21:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.073 [2024-07-24 21:26:14.864036] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:30.073 [2024-07-24 21:26:14.864160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ] 00:04:30.331 [2024-07-24 21:26:15.281419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.589 [2024-07-24 21:26:15.385758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.589 [2024-07-24 21:26:15.511560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:30.847 [2024-07-24 21:26:15.725032] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.847 [2024-07-24 21:26:15.757106] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.847 21:26:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.847 00:04:30.847 21:26:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:30.847 21:26:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.847 21:26:15 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:30.847 INFO: Checking if target configuration is the same... 
00:04:30.847 21:26:15 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:30.847 21:26:15 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:30.847 21:26:15 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.847 21:26:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.847 + '[' 2 -ne 2 ']' 00:04:30.847 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:30.847 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:30.847 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:30.847 +++ basename /dev/fd/62 00:04:30.847 ++ mktemp /tmp/62.XXX 00:04:30.847 + tmp_file_1=/tmp/62.Qxj 00:04:31.105 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.105 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.105 + tmp_file_2=/tmp/spdk_tgt_config.json.v3x 00:04:31.105 + ret=0 00:04:31.105 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.363 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.363 + diff -u /tmp/62.Qxj /tmp/spdk_tgt_config.json.v3x 00:04:31.363 INFO: JSON config files are the same 00:04:31.363 + echo 'INFO: JSON config files are the same' 00:04:31.363 + rm /tmp/62.Qxj /tmp/spdk_tgt_config.json.v3x 00:04:31.363 + exit 0 00:04:31.363 21:26:16 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:31.363 INFO: changing configuration and checking if this can be detected... 00:04:31.363 21:26:16 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.363 21:26:16 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.363 21:26:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.621 21:26:16 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:31.621 21:26:16 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.621 21:26:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.621 + '[' 2 -ne 2 ']' 00:04:31.621 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.621 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
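json_diff.sh, traced in full above, only calls two configs equal after normalizing both: each input (the live save_config output arriving on /dev/fd/62 and the saved spdk_tgt_config.json) goes through config_filter.py -method sort into a temp file, and a plain diff -u decides the outcome. The same idea condensed under a hypothetical name, assuming config_filter.py reads stdin and writes stdout:

  config_equal() {
      local live=$1 saved=$2 a b ret
      a=$(mktemp /tmp/62.XXX)
      b=$(mktemp /tmp/spdk_tgt_config.json.XXX)
      test/json_config/config_filter.py -method sort < "$live"  > "$a"
      test/json_config/config_filter.py -method sort < "$saved" > "$b"
      diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
      ret=$?
      rm -f "$a" "$b"
      return "$ret"
  }

  # Compare the running target's view of its config against the file it was started from.
  config_equal <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json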
00:04:31.621 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.621 +++ basename /dev/fd/62 00:04:31.621 ++ mktemp /tmp/62.XXX 00:04:31.621 + tmp_file_1=/tmp/62.Stt 00:04:31.621 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.621 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.621 + tmp_file_2=/tmp/spdk_tgt_config.json.y26 00:04:31.621 + ret=0 00:04:31.621 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.186 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.186 + diff -u /tmp/62.Stt /tmp/spdk_tgt_config.json.y26 00:04:32.186 + ret=1 00:04:32.186 + echo '=== Start of file: /tmp/62.Stt ===' 00:04:32.186 + cat /tmp/62.Stt 00:04:32.186 + echo '=== End of file: /tmp/62.Stt ===' 00:04:32.186 + echo '' 00:04:32.186 + echo '=== Start of file: /tmp/spdk_tgt_config.json.y26 ===' 00:04:32.186 + cat /tmp/spdk_tgt_config.json.y26 00:04:32.186 + echo '=== End of file: /tmp/spdk_tgt_config.json.y26 ===' 00:04:32.186 + echo '' 00:04:32.186 + rm /tmp/62.Stt /tmp/spdk_tgt_config.json.y26 00:04:32.186 + exit 1 00:04:32.186 INFO: configuration change detected. 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@321 -- # [[ -n 59457 ]] 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.186 21:26:17 json_config -- json_config/json_config.sh@327 -- # killprocess 59457 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@950 -- # '[' -z 59457 ']' 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@954 -- # kill -0 59457 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@955 -- # uname 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59457 00:04:32.186 
21:26:17 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.186 killing process with pid 59457 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59457' 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@969 -- # kill 59457 00:04:32.186 21:26:17 json_config -- common/autotest_common.sh@974 -- # wait 59457 00:04:32.444 21:26:17 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.444 21:26:17 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:32.444 21:26:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.444 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 21:26:17 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:32.703 INFO: Success 00:04:32.703 21:26:17 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:32.703 ************************************ 00:04:32.703 END TEST json_config 00:04:32.703 ************************************ 00:04:32.703 00:04:32.703 real 0m8.267s 00:04:32.703 user 0m11.731s 00:04:32.703 sys 0m1.763s 00:04:32.703 21:26:17 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.703 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 21:26:17 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.703 21:26:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.703 21:26:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.703 21:26:17 -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 ************************************ 00:04:32.703 START TEST json_config_extra_key 00:04:32.703 ************************************ 00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:04:32.703 21:26:17 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.703 21:26:17 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.703 21:26:17 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.703 21:26:17 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.703 21:26:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.703 21:26:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.703 21:26:17 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.703 21:26:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.703 21:26:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.703 21:26:17 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.703 21:26:17 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.703 INFO: launching applications... 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.703 21:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59598 00:04:32.703 Waiting for target to run... 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59598 /var/tmp/spdk_tgt.sock 00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59598 ']' 00:04:32.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.703 21:26:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.703 21:26:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.703 [2024-07-24 21:26:17.654455] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:32.703 [2024-07-24 21:26:17.654550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:04:33.268 [2024-07-24 21:26:18.075818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.268 [2024-07-24 21:26:18.182817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.268 [2024-07-24 21:26:18.203221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:33.834 21:26:18 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.834 00:04:33.834 21:26:18 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.834 INFO: shutting down applications... 00:04:33.834 21:26:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
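waitforlisten above is invoked with the new PID, rpc_addr=/var/tmp/spdk_tgt.sock and max_retries=100, and blocks until the freshly launched target answers on its RPC socket. Its body is hidden behind xtrace_disable in this log, so the loop below is only a rough sketch of the idea, using spdk_get_version (one of the RPCs listed later in this log) as the liveness probe:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died while starting up
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
                return 0                                    # RPC socket is up and answering
            fi
            sleep 0.1
        done
        return 1                                            # never came up within the retry budget
    }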
00:04:33.834 21:26:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59598 ]] 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59598 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59598 00:04:33.834 21:26:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.401 21:26:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.401 21:26:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.401 21:26:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59598 00:04:34.401 21:26:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59598 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.659 SPDK target shutdown done 00:04:34.659 21:26:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.659 Success 00:04:34.659 21:26:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.659 ************************************ 00:04:34.659 END TEST json_config_extra_key 00:04:34.659 ************************************ 00:04:34.659 00:04:34.659 real 0m2.072s 00:04:34.659 user 0m1.579s 00:04:34.659 sys 0m0.432s 00:04:34.659 21:26:19 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.659 21:26:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.659 21:26:19 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.659 21:26:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.659 21:26:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.659 21:26:19 -- common/autotest_common.sh@10 -- # set +x 00:04:34.659 ************************************ 00:04:34.659 START TEST alias_rpc 00:04:34.659 ************************************ 00:04:34.659 21:26:19 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.917 * Looking for test storage... 
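json_config_test_shutdown_app, traced just above, sends SIGINT to the recorded PID and then polls it with kill -0 in 0.5-second steps, giving the target up to 30 attempts to exit before giving up. The same pattern in isolation (the function name and the failure branch are mine; the signal, the 30-iteration budget and the 0.5 s sleep mirror the trace):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0    # nothing left to stop
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still alive after ~15 s; the run above never reaches this branch
    }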
00:04:34.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:34.917 21:26:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.917 21:26:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59669 00:04:34.917 21:26:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59669 00:04:34.917 21:26:19 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59669 ']' 00:04:34.917 21:26:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.917 21:26:19 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.917 21:26:19 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.917 21:26:19 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.917 21:26:19 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.917 21:26:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.917 [2024-07-24 21:26:19.806167] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:34.917 [2024-07-24 21:26:19.806262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59669 ] 00:04:35.176 [2024-07-24 21:26:19.940552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.176 [2024-07-24 21:26:20.062237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.176 [2024-07-24 21:26:20.133433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:35.755 21:26:20 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.755 21:26:20 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.756 21:26:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.013 21:26:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59669 00:04:36.013 21:26:21 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59669 ']' 00:04:36.013 21:26:21 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59669 00:04:36.013 21:26:21 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.013 21:26:21 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.013 21:26:21 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59669 00:04:36.271 killing process with pid 59669 00:04:36.271 21:26:21 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.271 21:26:21 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.271 21:26:21 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59669' 00:04:36.271 21:26:21 alias_rpc -- common/autotest_common.sh@969 -- # kill 59669 00:04:36.271 21:26:21 alias_rpc -- common/autotest_common.sh@974 -- # wait 59669 00:04:36.837 ************************************ 00:04:36.837 END TEST alias_rpc 00:04:36.838 ************************************ 00:04:36.838 00:04:36.838 real 0m1.912s 00:04:36.838 user 0m2.028s 00:04:36.838 sys 0m0.506s 00:04:36.838 21:26:21 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.838 21:26:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.838 21:26:21 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:36.838 21:26:21 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.838 21:26:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.838 21:26:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.838 21:26:21 -- common/autotest_common.sh@10 -- # set +x 00:04:36.838 ************************************ 00:04:36.838 START TEST spdkcli_tcp 00:04:36.838 ************************************ 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.838 * Looking for test storage... 00:04:36.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59745 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59745 00:04:36.838 21:26:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59745 ']' 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.838 21:26:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.838 [2024-07-24 21:26:21.757982] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
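killprocess, used in the alias_rpc teardown above, first checks that the PID is still alive with kill -0, then looks up the process name with ps --no-headers -o comm= (reactor_0 in these runs) so it can special-case a sudo wrapper, and only then kills and waits. A condensed version of that helper; the pgrep fallback for the sudo case is an assumption, since that branch is never taken in this log:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0                      # already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")         # reactor_0 for an SPDK target
            [[ $process_name == sudo ]] && pid=$(pgrep -P "$pid")   # assumed handling of a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }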
00:04:36.838 [2024-07-24 21:26:21.758956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:04:37.096 [2024-07-24 21:26:21.898426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.096 [2024-07-24 21:26:22.036050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.096 [2024-07-24 21:26:22.036065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.354 [2024-07-24 21:26:22.108648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.921 21:26:22 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.921 21:26:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:37.921 21:26:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.921 21:26:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59762 00:04:37.921 21:26:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:38.180 [ 00:04:38.180 "bdev_malloc_delete", 00:04:38.180 "bdev_malloc_create", 00:04:38.180 "bdev_null_resize", 00:04:38.180 "bdev_null_delete", 00:04:38.180 "bdev_null_create", 00:04:38.180 "bdev_nvme_cuse_unregister", 00:04:38.180 "bdev_nvme_cuse_register", 00:04:38.180 "bdev_opal_new_user", 00:04:38.180 "bdev_opal_set_lock_state", 00:04:38.180 "bdev_opal_delete", 00:04:38.180 "bdev_opal_get_info", 00:04:38.180 "bdev_opal_create", 00:04:38.180 "bdev_nvme_opal_revert", 00:04:38.180 "bdev_nvme_opal_init", 00:04:38.180 "bdev_nvme_send_cmd", 00:04:38.180 "bdev_nvme_get_path_iostat", 00:04:38.180 "bdev_nvme_get_mdns_discovery_info", 00:04:38.180 "bdev_nvme_stop_mdns_discovery", 00:04:38.180 "bdev_nvme_start_mdns_discovery", 00:04:38.180 "bdev_nvme_set_multipath_policy", 00:04:38.180 "bdev_nvme_set_preferred_path", 00:04:38.180 "bdev_nvme_get_io_paths", 00:04:38.180 "bdev_nvme_remove_error_injection", 00:04:38.180 "bdev_nvme_add_error_injection", 00:04:38.180 "bdev_nvme_get_discovery_info", 00:04:38.180 "bdev_nvme_stop_discovery", 00:04:38.180 "bdev_nvme_start_discovery", 00:04:38.180 "bdev_nvme_get_controller_health_info", 00:04:38.180 "bdev_nvme_disable_controller", 00:04:38.180 "bdev_nvme_enable_controller", 00:04:38.180 "bdev_nvme_reset_controller", 00:04:38.180 "bdev_nvme_get_transport_statistics", 00:04:38.180 "bdev_nvme_apply_firmware", 00:04:38.180 "bdev_nvme_detach_controller", 00:04:38.180 "bdev_nvme_get_controllers", 00:04:38.180 "bdev_nvme_attach_controller", 00:04:38.180 "bdev_nvme_set_hotplug", 00:04:38.180 "bdev_nvme_set_options", 00:04:38.180 "bdev_passthru_delete", 00:04:38.180 "bdev_passthru_create", 00:04:38.180 "bdev_lvol_set_parent_bdev", 00:04:38.180 "bdev_lvol_set_parent", 00:04:38.180 "bdev_lvol_check_shallow_copy", 00:04:38.180 "bdev_lvol_start_shallow_copy", 00:04:38.180 "bdev_lvol_grow_lvstore", 00:04:38.180 "bdev_lvol_get_lvols", 00:04:38.180 "bdev_lvol_get_lvstores", 00:04:38.180 "bdev_lvol_delete", 00:04:38.180 "bdev_lvol_set_read_only", 00:04:38.180 "bdev_lvol_resize", 00:04:38.180 "bdev_lvol_decouple_parent", 00:04:38.180 "bdev_lvol_inflate", 00:04:38.180 "bdev_lvol_rename", 00:04:38.180 "bdev_lvol_clone_bdev", 00:04:38.180 "bdev_lvol_clone", 00:04:38.180 "bdev_lvol_snapshot", 00:04:38.180 "bdev_lvol_create", 
00:04:38.180 "bdev_lvol_delete_lvstore", 00:04:38.180 "bdev_lvol_rename_lvstore", 00:04:38.180 "bdev_lvol_create_lvstore", 00:04:38.180 "bdev_raid_set_options", 00:04:38.180 "bdev_raid_remove_base_bdev", 00:04:38.181 "bdev_raid_add_base_bdev", 00:04:38.181 "bdev_raid_delete", 00:04:38.181 "bdev_raid_create", 00:04:38.181 "bdev_raid_get_bdevs", 00:04:38.181 "bdev_error_inject_error", 00:04:38.181 "bdev_error_delete", 00:04:38.181 "bdev_error_create", 00:04:38.181 "bdev_split_delete", 00:04:38.181 "bdev_split_create", 00:04:38.181 "bdev_delay_delete", 00:04:38.181 "bdev_delay_create", 00:04:38.181 "bdev_delay_update_latency", 00:04:38.181 "bdev_zone_block_delete", 00:04:38.181 "bdev_zone_block_create", 00:04:38.181 "blobfs_create", 00:04:38.181 "blobfs_detect", 00:04:38.181 "blobfs_set_cache_size", 00:04:38.181 "bdev_aio_delete", 00:04:38.181 "bdev_aio_rescan", 00:04:38.181 "bdev_aio_create", 00:04:38.181 "bdev_ftl_set_property", 00:04:38.181 "bdev_ftl_get_properties", 00:04:38.181 "bdev_ftl_get_stats", 00:04:38.181 "bdev_ftl_unmap", 00:04:38.181 "bdev_ftl_unload", 00:04:38.181 "bdev_ftl_delete", 00:04:38.181 "bdev_ftl_load", 00:04:38.181 "bdev_ftl_create", 00:04:38.181 "bdev_virtio_attach_controller", 00:04:38.181 "bdev_virtio_scsi_get_devices", 00:04:38.181 "bdev_virtio_detach_controller", 00:04:38.181 "bdev_virtio_blk_set_hotplug", 00:04:38.181 "bdev_iscsi_delete", 00:04:38.181 "bdev_iscsi_create", 00:04:38.181 "bdev_iscsi_set_options", 00:04:38.181 "bdev_uring_delete", 00:04:38.181 "bdev_uring_rescan", 00:04:38.181 "bdev_uring_create", 00:04:38.181 "accel_error_inject_error", 00:04:38.181 "ioat_scan_accel_module", 00:04:38.181 "dsa_scan_accel_module", 00:04:38.181 "iaa_scan_accel_module", 00:04:38.181 "keyring_file_remove_key", 00:04:38.181 "keyring_file_add_key", 00:04:38.181 "keyring_linux_set_options", 00:04:38.181 "iscsi_get_histogram", 00:04:38.181 "iscsi_enable_histogram", 00:04:38.181 "iscsi_set_options", 00:04:38.181 "iscsi_get_auth_groups", 00:04:38.181 "iscsi_auth_group_remove_secret", 00:04:38.181 "iscsi_auth_group_add_secret", 00:04:38.181 "iscsi_delete_auth_group", 00:04:38.181 "iscsi_create_auth_group", 00:04:38.181 "iscsi_set_discovery_auth", 00:04:38.181 "iscsi_get_options", 00:04:38.181 "iscsi_target_node_request_logout", 00:04:38.181 "iscsi_target_node_set_redirect", 00:04:38.181 "iscsi_target_node_set_auth", 00:04:38.181 "iscsi_target_node_add_lun", 00:04:38.181 "iscsi_get_stats", 00:04:38.181 "iscsi_get_connections", 00:04:38.181 "iscsi_portal_group_set_auth", 00:04:38.181 "iscsi_start_portal_group", 00:04:38.181 "iscsi_delete_portal_group", 00:04:38.181 "iscsi_create_portal_group", 00:04:38.181 "iscsi_get_portal_groups", 00:04:38.181 "iscsi_delete_target_node", 00:04:38.181 "iscsi_target_node_remove_pg_ig_maps", 00:04:38.181 "iscsi_target_node_add_pg_ig_maps", 00:04:38.181 "iscsi_create_target_node", 00:04:38.181 "iscsi_get_target_nodes", 00:04:38.181 "iscsi_delete_initiator_group", 00:04:38.181 "iscsi_initiator_group_remove_initiators", 00:04:38.181 "iscsi_initiator_group_add_initiators", 00:04:38.181 "iscsi_create_initiator_group", 00:04:38.181 "iscsi_get_initiator_groups", 00:04:38.181 "nvmf_set_crdt", 00:04:38.181 "nvmf_set_config", 00:04:38.181 "nvmf_set_max_subsystems", 00:04:38.181 "nvmf_stop_mdns_prr", 00:04:38.181 "nvmf_publish_mdns_prr", 00:04:38.181 "nvmf_subsystem_get_listeners", 00:04:38.181 "nvmf_subsystem_get_qpairs", 00:04:38.181 "nvmf_subsystem_get_controllers", 00:04:38.181 "nvmf_get_stats", 00:04:38.181 "nvmf_get_transports", 00:04:38.181 
"nvmf_create_transport", 00:04:38.181 "nvmf_get_targets", 00:04:38.181 "nvmf_delete_target", 00:04:38.181 "nvmf_create_target", 00:04:38.181 "nvmf_subsystem_allow_any_host", 00:04:38.181 "nvmf_subsystem_remove_host", 00:04:38.181 "nvmf_subsystem_add_host", 00:04:38.181 "nvmf_ns_remove_host", 00:04:38.181 "nvmf_ns_add_host", 00:04:38.181 "nvmf_subsystem_remove_ns", 00:04:38.181 "nvmf_subsystem_add_ns", 00:04:38.181 "nvmf_subsystem_listener_set_ana_state", 00:04:38.181 "nvmf_discovery_get_referrals", 00:04:38.181 "nvmf_discovery_remove_referral", 00:04:38.181 "nvmf_discovery_add_referral", 00:04:38.181 "nvmf_subsystem_remove_listener", 00:04:38.181 "nvmf_subsystem_add_listener", 00:04:38.181 "nvmf_delete_subsystem", 00:04:38.181 "nvmf_create_subsystem", 00:04:38.181 "nvmf_get_subsystems", 00:04:38.181 "env_dpdk_get_mem_stats", 00:04:38.181 "nbd_get_disks", 00:04:38.181 "nbd_stop_disk", 00:04:38.181 "nbd_start_disk", 00:04:38.181 "ublk_recover_disk", 00:04:38.181 "ublk_get_disks", 00:04:38.181 "ublk_stop_disk", 00:04:38.181 "ublk_start_disk", 00:04:38.181 "ublk_destroy_target", 00:04:38.181 "ublk_create_target", 00:04:38.181 "virtio_blk_create_transport", 00:04:38.181 "virtio_blk_get_transports", 00:04:38.181 "vhost_controller_set_coalescing", 00:04:38.181 "vhost_get_controllers", 00:04:38.181 "vhost_delete_controller", 00:04:38.181 "vhost_create_blk_controller", 00:04:38.181 "vhost_scsi_controller_remove_target", 00:04:38.181 "vhost_scsi_controller_add_target", 00:04:38.181 "vhost_start_scsi_controller", 00:04:38.181 "vhost_create_scsi_controller", 00:04:38.181 "thread_set_cpumask", 00:04:38.181 "framework_get_governor", 00:04:38.181 "framework_get_scheduler", 00:04:38.181 "framework_set_scheduler", 00:04:38.181 "framework_get_reactors", 00:04:38.181 "thread_get_io_channels", 00:04:38.181 "thread_get_pollers", 00:04:38.181 "thread_get_stats", 00:04:38.181 "framework_monitor_context_switch", 00:04:38.181 "spdk_kill_instance", 00:04:38.181 "log_enable_timestamps", 00:04:38.181 "log_get_flags", 00:04:38.181 "log_clear_flag", 00:04:38.181 "log_set_flag", 00:04:38.181 "log_get_level", 00:04:38.181 "log_set_level", 00:04:38.181 "log_get_print_level", 00:04:38.181 "log_set_print_level", 00:04:38.181 "framework_enable_cpumask_locks", 00:04:38.181 "framework_disable_cpumask_locks", 00:04:38.181 "framework_wait_init", 00:04:38.181 "framework_start_init", 00:04:38.181 "scsi_get_devices", 00:04:38.181 "bdev_get_histogram", 00:04:38.181 "bdev_enable_histogram", 00:04:38.181 "bdev_set_qos_limit", 00:04:38.181 "bdev_set_qd_sampling_period", 00:04:38.181 "bdev_get_bdevs", 00:04:38.181 "bdev_reset_iostat", 00:04:38.181 "bdev_get_iostat", 00:04:38.181 "bdev_examine", 00:04:38.181 "bdev_wait_for_examine", 00:04:38.181 "bdev_set_options", 00:04:38.181 "notify_get_notifications", 00:04:38.181 "notify_get_types", 00:04:38.181 "accel_get_stats", 00:04:38.181 "accel_set_options", 00:04:38.181 "accel_set_driver", 00:04:38.181 "accel_crypto_key_destroy", 00:04:38.181 "accel_crypto_keys_get", 00:04:38.181 "accel_crypto_key_create", 00:04:38.181 "accel_assign_opc", 00:04:38.181 "accel_get_module_info", 00:04:38.181 "accel_get_opc_assignments", 00:04:38.181 "vmd_rescan", 00:04:38.181 "vmd_remove_device", 00:04:38.181 "vmd_enable", 00:04:38.181 "sock_get_default_impl", 00:04:38.181 "sock_set_default_impl", 00:04:38.181 "sock_impl_set_options", 00:04:38.181 "sock_impl_get_options", 00:04:38.181 "iobuf_get_stats", 00:04:38.181 "iobuf_set_options", 00:04:38.181 "framework_get_pci_devices", 00:04:38.181 
"framework_get_config", 00:04:38.181 "framework_get_subsystems", 00:04:38.181 "trace_get_info", 00:04:38.181 "trace_get_tpoint_group_mask", 00:04:38.181 "trace_disable_tpoint_group", 00:04:38.181 "trace_enable_tpoint_group", 00:04:38.181 "trace_clear_tpoint_mask", 00:04:38.181 "trace_set_tpoint_mask", 00:04:38.181 "keyring_get_keys", 00:04:38.181 "spdk_get_version", 00:04:38.181 "rpc_get_methods" 00:04:38.181 ] 00:04:38.181 21:26:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:38.181 21:26:22 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.181 21:26:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.181 21:26:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:38.181 21:26:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59745 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59745 ']' 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59745 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59745 00:04:38.181 killing process with pid 59745 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59745' 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59745 00:04:38.181 21:26:23 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59745 00:04:38.747 00:04:38.747 real 0m1.977s 00:04:38.747 user 0m3.576s 00:04:38.747 sys 0m0.526s 00:04:38.747 21:26:23 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.747 21:26:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.747 ************************************ 00:04:38.747 END TEST spdkcli_tcp 00:04:38.747 ************************************ 00:04:38.747 21:26:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.747 21:26:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.747 21:26:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.747 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:04:38.747 ************************************ 00:04:38.747 START TEST dpdk_mem_utility 00:04:38.747 ************************************ 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.747 * Looking for test storage... 00:04:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:38.747 21:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.747 21:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59836 00:04:38.747 21:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59836 00:04:38.747 21:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59836 ']' 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.747 21:26:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.005 [2024-07-24 21:26:23.768509] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:39.005 [2024-07-24 21:26:23.768609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 00:04:39.005 [2024-07-24 21:26:23.905144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.262 [2024-07-24 21:26:24.028271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.262 [2024-07-24 21:26:24.097572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.828 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.828 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:39.828 21:26:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:39.828 21:26:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:39.828 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.828 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.828 { 00:04:39.828 "filename": "/tmp/spdk_mem_dump.txt" 00:04:39.828 } 00:04:39.828 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.828 21:26:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.828 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:39.828 1 heaps totaling size 814.000000 MiB 00:04:39.828 size: 814.000000 MiB heap id: 0 00:04:39.828 end heaps---------- 00:04:39.828 8 mempools totaling size 598.116089 MiB 00:04:39.828 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:39.828 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:39.828 size: 84.521057 MiB name: bdev_io_59836 00:04:39.828 size: 51.011292 MiB name: evtpool_59836 00:04:39.828 size: 50.003479 MiB name: msgpool_59836 00:04:39.828 size: 21.763794 MiB name: PDU_Pool 00:04:39.828 size: 19.513306 
MiB name: SCSI_TASK_Pool 00:04:39.828 size: 0.026123 MiB name: Session_Pool 00:04:39.828 end mempools------- 00:04:39.828 6 memzones totaling size 4.142822 MiB 00:04:39.828 size: 1.000366 MiB name: RG_ring_0_59836 00:04:39.828 size: 1.000366 MiB name: RG_ring_1_59836 00:04:39.828 size: 1.000366 MiB name: RG_ring_4_59836 00:04:39.828 size: 1.000366 MiB name: RG_ring_5_59836 00:04:39.828 size: 0.125366 MiB name: RG_ring_2_59836 00:04:39.828 size: 0.015991 MiB name: RG_ring_3_59836 00:04:39.828 end memzones------- 00:04:39.828 21:26:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.088 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:04:40.088 list of free elements. size: 12.472290 MiB 00:04:40.088 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:40.088 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:40.088 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:40.088 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:40.088 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:40.088 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:40.088 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:40.088 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:40.088 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:40.088 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:04:40.088 element at address: 0x20000b200000 with size: 0.489624 MiB 00:04:40.088 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:40.088 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:40.088 element at address: 0x200027e00000 with size: 0.396118 MiB 00:04:40.088 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:40.088 list of standard malloc elements. 
size: 199.265137 MiB 00:04:40.088 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:40.088 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:40.088 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:40.088 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:40.088 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:40.088 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:40.088 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:40.088 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:40.088 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:40.088 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:40.088 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:40.088 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:40.088 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92080 
with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:04:40.089 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:40.089 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e65740 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c340 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:40.089 element at 
address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:40.089 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6fd80 
with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:40.090 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:40.090 list of memzone associated elements. size: 602.262573 MiB 00:04:40.090 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:40.090 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.090 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:40.090 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.090 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:40.090 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59836_0 00:04:40.090 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:40.090 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59836_0 00:04:40.090 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:40.090 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59836_0 00:04:40.090 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:40.090 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.090 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:40.090 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.090 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:40.090 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59836 00:04:40.090 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:40.090 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59836 00:04:40.090 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:40.090 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59836 00:04:40.090 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:40.090 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.090 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:40.090 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.090 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:40.090 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.090 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:40.090 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.090 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:40.090 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59836 00:04:40.090 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:40.090 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59836 00:04:40.090 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:40.090 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59836 00:04:40.090 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:40.090 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59836 00:04:40.090 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:40.090 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59836 00:04:40.090 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:40.090 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.090 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:40.090 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.090 element at address: 0x20001947c540 with size: 
0.250488 MiB 00:04:40.090 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.090 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:40.090 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59836 00:04:40.090 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:40.090 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.090 element at address: 0x200027e65800 with size: 0.023743 MiB 00:04:40.090 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.090 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:40.090 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59836 00:04:40.090 element at address: 0x200027e6b940 with size: 0.002441 MiB 00:04:40.090 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.090 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:40.090 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59836 00:04:40.090 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:40.090 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59836 00:04:40.090 element at address: 0x200027e6c400 with size: 0.000305 MiB 00:04:40.090 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.090 21:26:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.090 21:26:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59836 00:04:40.090 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59836 ']' 00:04:40.090 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59836 00:04:40.090 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:40.090 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.091 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59836 00:04:40.091 killing process with pid 59836 00:04:40.091 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.091 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.091 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59836' 00:04:40.091 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59836 00:04:40.091 21:26:24 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59836 00:04:40.657 00:04:40.657 real 0m1.778s 00:04:40.657 user 0m1.864s 00:04:40.657 sys 0m0.453s 00:04:40.657 21:26:25 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.657 21:26:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.657 ************************************ 00:04:40.657 END TEST dpdk_mem_utility 00:04:40.657 ************************************ 00:04:40.657 21:26:25 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.657 21:26:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.657 21:26:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.657 21:26:25 -- common/autotest_common.sh@10 -- # set +x 00:04:40.657 ************************************ 00:04:40.657 START TEST event 00:04:40.657 ************************************ 00:04:40.657 21:26:25 event -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.657 * Looking for test storage... 00:04:40.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:40.657 21:26:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:40.657 21:26:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.657 21:26:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.657 21:26:25 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:40.657 21:26:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.657 21:26:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.657 ************************************ 00:04:40.657 START TEST event_perf 00:04:40.657 ************************************ 00:04:40.657 21:26:25 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.657 Running I/O for 1 seconds...[2024-07-24 21:26:25.567795] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:04:40.657 [2024-07-24 21:26:25.567878] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59907 ] 00:04:40.915 [2024-07-24 21:26:25.700989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.915 [2024-07-24 21:26:25.835769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.915 [2024-07-24 21:26:25.835918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.915 [2024-07-24 21:26:25.836051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.915 [2024-07-24 21:26:25.836053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.288 Running I/O for 1 seconds... 00:04:42.288 lcore 0: 131469 00:04:42.288 lcore 1: 131466 00:04:42.288 lcore 2: 131467 00:04:42.288 lcore 3: 131469 00:04:42.288 done. 00:04:42.288 00:04:42.288 ************************************ 00:04:42.288 END TEST event_perf 00:04:42.288 ************************************ 00:04:42.288 real 0m1.403s 00:04:42.288 user 0m4.206s 00:04:42.288 sys 0m0.071s 00:04:42.288 21:26:26 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.288 21:26:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.288 21:26:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.288 21:26:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:42.288 21:26:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.288 21:26:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.288 ************************************ 00:04:42.288 START TEST event_reactor 00:04:42.288 ************************************ 00:04:42.288 21:26:27 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.288 [2024-07-24 21:26:27.021998] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
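For reference, the event_perf step recorded above can be rerun by hand against the same built tree; a minimal sketch using the exact invocation captured in this log (paths are the ones this job uses):

  # run the event framework benchmark for 1 second across the four cores in
  # mask 0xF; on exit it prints per-lcore event totals, which is where the
  # "lcore 0: 131469" style lines above come from
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1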
00:04:42.288 [2024-07-24 21:26:27.022087] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59946 ] 00:04:42.288 [2024-07-24 21:26:27.155243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.288 [2024-07-24 21:26:27.267117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.687 test_start 00:04:43.687 oneshot 00:04:43.687 tick 100 00:04:43.687 tick 100 00:04:43.687 tick 250 00:04:43.687 tick 100 00:04:43.687 tick 100 00:04:43.687 tick 100 00:04:43.687 tick 250 00:04:43.687 tick 500 00:04:43.687 tick 100 00:04:43.687 tick 100 00:04:43.687 tick 250 00:04:43.687 tick 100 00:04:43.687 tick 100 00:04:43.687 test_end 00:04:43.687 00:04:43.687 real 0m1.361s 00:04:43.687 user 0m1.196s 00:04:43.687 sys 0m0.059s 00:04:43.687 21:26:28 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.687 ************************************ 00:04:43.687 END TEST event_reactor 00:04:43.687 ************************************ 00:04:43.687 21:26:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.687 21:26:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.687 21:26:28 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:43.687 21:26:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.687 21:26:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.687 ************************************ 00:04:43.687 START TEST event_reactor_perf 00:04:43.687 ************************************ 00:04:43.687 21:26:28 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.687 [2024-07-24 21:26:28.438116] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
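The reactor test above is the single-core variant of the same harness (its EAL parameters show -c 0x1); a minimal sketch of that invocation, with the caveat that the meaning of the tick numbers is inferred from the output rather than stated in the log:

  # run one reactor for 1 second; the oneshot/tick lines it prints appear to be
  # its registered events and timed pollers firing (the numbers look like their
  # configured periods), finishing with test_end
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1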
00:04:43.687 [2024-07-24 21:26:28.438437] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:04:43.687 [2024-07-24 21:26:28.576030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.687 [2024-07-24 21:26:28.676752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.064 test_start 00:04:45.064 test_end 00:04:45.064 Performance: 436750 events per second 00:04:45.064 00:04:45.064 real 0m1.356s 00:04:45.064 user 0m1.191s 00:04:45.064 sys 0m0.059s 00:04:45.064 21:26:29 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.064 21:26:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.064 ************************************ 00:04:45.064 END TEST event_reactor_perf 00:04:45.064 ************************************ 00:04:45.064 21:26:29 event -- event/event.sh@49 -- # uname -s 00:04:45.064 21:26:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:45.064 21:26:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:45.064 21:26:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.064 21:26:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.064 21:26:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.064 ************************************ 00:04:45.064 START TEST event_scheduler 00:04:45.064 ************************************ 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:45.064 * Looking for test storage... 00:04:45.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:45.064 21:26:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:45.064 21:26:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60043 00:04:45.064 21:26:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.064 21:26:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:45.064 21:26:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60043 00:04:45.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60043 ']' 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.064 21:26:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.064 [2024-07-24 21:26:29.966852] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
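The scheduler test launched at the end of the block above starts its app paused; a minimal sketch of that launch sequence under the flags recorded here (the -p 0x2 main-core flag matches --main-lcore=2 in the EAL parameters that follow, and waitforlisten is the autotest_common.sh helper whose "Waiting for process..." message appears above):

  # start the scheduler test app on cores 0-3 with core 2 as the main lcore,
  # deferring subsystem init until framework_start_init arrives over RPC
  # (--wait-for-rpc); then block until its /var/tmp/spdk.sock socket is up
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  waitforlisten "$scheduler_pid"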
00:04:45.064 [2024-07-24 21:26:29.966958] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60043 ] 00:04:45.323 [2024-07-24 21:26:30.106014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.323 [2024-07-24 21:26:30.233941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.323 [2024-07-24 21:26:30.234057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.323 [2024-07-24 21:26:30.234188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.323 [2024-07-24 21:26:30.234195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:46.260 21:26:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.260 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.260 POWER: Cannot set governor of lcore 0 to performance 00:04:46.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.260 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.260 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.260 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:46.260 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:46.260 POWER: Unable to set Power Management Environment for lcore 0 00:04:46.260 [2024-07-24 21:26:30.974266] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:46.260 [2024-07-24 21:26:30.974390] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:46.260 [2024-07-24 21:26:30.974492] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:46.260 [2024-07-24 21:26:30.974597] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:46.260 [2024-07-24 21:26:30.974749] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:46.260 [2024-07-24 21:26:30.974796] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 [2024-07-24 21:26:31.035647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.260 [2024-07-24 21:26:31.068524] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:46.260 21:26:31 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:46.260 21:26:31 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.260 21:26:31 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 ************************************ 00:04:46.260 START TEST scheduler_create_thread 00:04:46.260 ************************************ 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 2 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 3 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 4 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 5 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 6 00:04:46.260 
21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.260 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.260 7 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.261 8 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.261 9 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.261 10 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.261 21:26:31 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.261 21:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.164 21:26:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.164 21:26:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.164 21:26:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.164 21:26:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.164 21:26:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.730 ************************************ 00:04:48.730 END TEST scheduler_create_thread 00:04:48.730 ************************************ 00:04:48.730 21:26:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.730 00:04:48.730 real 0m2.615s 00:04:48.730 user 0m0.018s 00:04:48.730 sys 0m0.006s 00:04:48.730 21:26:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.730 21:26:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.988 21:26:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.988 21:26:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60043 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60043 ']' 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60043 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60043 00:04:48.988 killing process with pid 60043 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60043' 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60043 00:04:48.988 21:26:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60043 00:04:49.247 [2024-07-24 21:26:34.177228] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
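The scheduler_create_thread test above drives the app entirely through its RPC plugin; condensed, the calls it issues look like the sketch below (rpc_cmd is the harness wrapper around scripts/rpc.py, the scheduler_* RPCs are registered by the test app's scheduler_plugin rather than stock rpc.py, and thread ids 11 and 12 are simply the ones returned in this run):

  # create a thread pinned to core 0 at 100% activity, set an existing
  # thread's activity to 50, then delete a thread (ids as observed above)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12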
00:04:49.505 ************************************ 00:04:49.505 END TEST event_scheduler 00:04:49.505 ************************************ 00:04:49.505 00:04:49.505 real 0m4.609s 00:04:49.505 user 0m8.712s 00:04:49.505 sys 0m0.365s 00:04:49.505 21:26:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.505 21:26:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.505 21:26:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.505 21:26:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.505 21:26:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.505 21:26:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.505 21:26:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.505 ************************************ 00:04:49.505 START TEST app_repeat 00:04:49.505 ************************************ 00:04:49.505 21:26:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60142 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.505 Process app_repeat pid: 60142 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60142' 00:04:49.505 spdk_app_start Round 0 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.505 21:26:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60142 /var/tmp/spdk-nbd.sock 00:04:49.505 21:26:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60142 ']' 00:04:49.505 21:26:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.505 21:26:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.505 21:26:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.506 21:26:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.506 21:26:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.764 [2024-07-24 21:26:34.521294] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
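Each app_repeat round below rebuilds the same fixture over the app's private RPC socket; a minimal sketch of that setup using the RPC calls recorded in the log (bdev_malloc_create's two positional arguments read here as a 64 MB bdev with a 4096-byte block size):

  # create two malloc bdevs and expose them as kernel NBD block devices
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1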
00:04:49.764 [2024-07-24 21:26:34.521379] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60142 ] 00:04:49.764 [2024-07-24 21:26:34.651895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.764 [2024-07-24 21:26:34.757014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.764 [2024-07-24 21:26:34.757035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.022 [2024-07-24 21:26:34.827549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.590 21:26:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.590 21:26:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:50.590 21:26:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.848 Malloc0 00:04:50.848 21:26:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.141 Malloc1 00:04:51.141 21:26:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.141 21:26:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.399 /dev/nbd0 00:04:51.399 21:26:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.399 21:26:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:51.399 21:26:36 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.399 1+0 records in 00:04:51.399 1+0 records out 00:04:51.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707626 s, 5.8 MB/s 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:51.399 21:26:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:51.399 21:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.399 21:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.399 21:26:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.657 /dev/nbd1 00:04:51.657 21:26:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.657 21:26:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.657 1+0 records in 00:04:51.657 1+0 records out 00:04:51.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265734 s, 15.4 MB/s 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:51.657 21:26:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.658 21:26:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:51.658 21:26:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:51.658 21:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.658 21:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.658 21:26:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:51.658 21:26:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.658 21:26:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.915 21:26:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.915 { 00:04:51.915 "nbd_device": "/dev/nbd0", 00:04:51.915 "bdev_name": "Malloc0" 00:04:51.915 }, 00:04:51.915 { 00:04:51.915 "nbd_device": "/dev/nbd1", 00:04:51.915 "bdev_name": "Malloc1" 00:04:51.916 } 00:04:51.916 ]' 00:04:51.916 21:26:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.916 21:26:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.916 { 00:04:51.916 "nbd_device": "/dev/nbd0", 00:04:51.916 "bdev_name": "Malloc0" 00:04:51.916 }, 00:04:51.916 { 00:04:51.916 "nbd_device": "/dev/nbd1", 00:04:51.916 "bdev_name": "Malloc1" 00:04:51.916 } 00:04:51.916 ]' 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.174 /dev/nbd1' 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.174 /dev/nbd1' 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.174 21:26:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.174 256+0 records in 00:04:52.174 256+0 records out 00:04:52.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656019 s, 160 MB/s 00:04:52.175 21:26:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.175 21:26:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.175 256+0 records in 00:04:52.175 256+0 records out 00:04:52.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260472 s, 40.3 MB/s 00:04:52.175 21:26:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.175 21:26:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.175 256+0 records in 00:04:52.175 256+0 records out 00:04:52.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292086 s, 35.9 MB/s 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.175 21:26:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.433 21:26:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.692 21:26:37 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.692 21:26:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.950 21:26:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.950 21:26:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.208 21:26:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.467 [2024-07-24 21:26:38.385032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.726 [2024-07-24 21:26:38.477499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.726 [2024-07-24 21:26:38.477506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.726 [2024-07-24 21:26:38.548602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.726 [2024-07-24 21:26:38.548731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.726 [2024-07-24 21:26:38.548744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.259 21:26:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.260 spdk_app_start Round 1 00:04:56.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.260 21:26:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.260 21:26:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60142 /var/tmp/spdk-nbd.sock 00:04:56.260 21:26:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60142 ']' 00:04:56.260 21:26:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.260 21:26:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.260 21:26:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
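The data check that nbd_rpc_data_verify ran above reduces to a seed/write/compare loop; a condensed sketch of it (the temp file path is shortened from the nbdrandtest path in the log):

  # seed 1 MiB of random data, write it onto each NBD device with O_DIRECT,
  # then compare the device contents back against the seed file
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest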
00:04:56.260 21:26:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.260 21:26:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.518 21:26:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.518 21:26:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:56.518 21:26:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.777 Malloc0 00:04:56.777 21:26:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.035 Malloc1 00:04:57.036 21:26:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.036 21:26:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.294 /dev/nbd0 00:04:57.294 21:26:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.294 21:26:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:57.294 21:26:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.295 1+0 records in 00:04:57.295 1+0 records out 
00:04:57.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230949 s, 17.7 MB/s 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:57.295 21:26:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:57.295 21:26:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.295 21:26:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.295 21:26:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.553 /dev/nbd1 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.553 1+0 records in 00:04:57.553 1+0 records out 00:04:57.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465568 s, 8.8 MB/s 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:57.553 21:26:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.553 21:26:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.812 21:26:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.812 { 00:04:57.812 "nbd_device": "/dev/nbd0", 00:04:57.812 "bdev_name": "Malloc0" 00:04:57.812 }, 00:04:57.812 { 00:04:57.812 "nbd_device": "/dev/nbd1", 00:04:57.812 "bdev_name": "Malloc1" 00:04:57.812 } 
00:04:57.812 ]' 00:04:57.812 21:26:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.812 { 00:04:57.812 "nbd_device": "/dev/nbd0", 00:04:57.812 "bdev_name": "Malloc0" 00:04:57.812 }, 00:04:57.812 { 00:04:57.812 "nbd_device": "/dev/nbd1", 00:04:57.812 "bdev_name": "Malloc1" 00:04:57.812 } 00:04:57.812 ]' 00:04:57.812 21:26:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.071 /dev/nbd1' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.071 /dev/nbd1' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.071 256+0 records in 00:04:58.071 256+0 records out 00:04:58.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00729159 s, 144 MB/s 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.071 256+0 records in 00:04:58.071 256+0 records out 00:04:58.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266611 s, 39.3 MB/s 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.071 256+0 records in 00:04:58.071 256+0 records out 00:04:58.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263265 s, 39.8 MB/s 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.071 21:26:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.071 21:26:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.330 21:26:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.587 21:26:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.845 21:26:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.845 21:26:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.410 21:26:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.668 [2024-07-24 21:26:44.412175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.668 [2024-07-24 21:26:44.510818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.668 [2024-07-24 21:26:44.510824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.668 [2024-07-24 21:26:44.581422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:59.668 [2024-07-24 21:26:44.581514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.668 [2024-07-24 21:26:44.581527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.197 spdk_app_start Round 2 00:05:02.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.197 21:26:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.198 21:26:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.198 21:26:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60142 /var/tmp/spdk-nbd.sock 00:05:02.198 21:26:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60142 ']' 00:05:02.198 21:26:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.198 21:26:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.198 21:26:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
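The nbd_dd_data_verify pass traced above is a plain write/compare round trip; a condensed sketch of that flow, assuming the two NBD devices are already mapped (paths and names mirror the trace but are illustrative, not the helper itself):

    # write a 1 MiB random reference file, push it to every NBD device, then compare it back
    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                              # verify phase: fails loudly on any mismatch
    done
    rm "$tmp"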
00:05:02.198 21:26:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.198 21:26:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.456 21:26:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.456 21:26:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:02.456 21:26:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.714 Malloc0 00:05:02.714 21:26:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.972 Malloc1 00:05:02.972 21:26:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.972 21:26:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.230 /dev/nbd0 00:05:03.230 21:26:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.230 21:26:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.230 1+0 records in 00:05:03.230 1+0 records out 
00:05:03.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046355 s, 8.8 MB/s 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:03.230 21:26:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:03.230 21:26:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.230 21:26:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.230 21:26:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.487 /dev/nbd1 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.487 1+0 records in 00:05:03.487 1+0 records out 00:05:03.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243414 s, 16.8 MB/s 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:03.487 21:26:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.487 21:26:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.052 { 00:05:04.052 "nbd_device": "/dev/nbd0", 00:05:04.052 "bdev_name": "Malloc0" 00:05:04.052 }, 00:05:04.052 { 00:05:04.052 "nbd_device": "/dev/nbd1", 00:05:04.052 "bdev_name": "Malloc1" 00:05:04.052 } 
00:05:04.052 ]' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.052 { 00:05:04.052 "nbd_device": "/dev/nbd0", 00:05:04.052 "bdev_name": "Malloc0" 00:05:04.052 }, 00:05:04.052 { 00:05:04.052 "nbd_device": "/dev/nbd1", 00:05:04.052 "bdev_name": "Malloc1" 00:05:04.052 } 00:05:04.052 ]' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.052 /dev/nbd1' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.052 /dev/nbd1' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.052 256+0 records in 00:05:04.052 256+0 records out 00:05:04.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00894996 s, 117 MB/s 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.052 256+0 records in 00:05:04.052 256+0 records out 00:05:04.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025757 s, 40.7 MB/s 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.052 256+0 records in 00:05:04.052 256+0 records out 00:05:04.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236476 s, 44.3 MB/s 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.052 21:26:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.310 21:26:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.568 21:26:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.827 21:26:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.085 21:26:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.085 21:26:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.085 21:26:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.085 21:26:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.343 21:26:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.601 [2024-07-24 21:26:50.377502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.601 [2024-07-24 21:26:50.504439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.601 [2024-07-24 21:26:50.504447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.601 [2024-07-24 21:26:50.576563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.601 [2024-07-24 21:26:50.576671] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.601 [2024-07-24 21:26:50.576687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.184 21:26:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60142 /var/tmp/spdk-nbd.sock 00:05:08.184 21:26:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60142 ']' 00:05:08.184 21:26:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.184 21:26:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.184 21:26:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
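Each app_repeat round traced here follows the same start/verify/kill cadence; a simplified sketch of that loop, assuming the SPDK autotest helpers (waitforlisten) are sourced and $app_pid tracks the relaunched app - the real event.sh does more bookkeeping:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for round in {0..2}; do
        echo "spdk_app_start Round $round"
        waitforlisten "$app_pid" "$sock"                # block until the app answers on its RPC socket
        "$rpc" -s "$sock" bdev_malloc_create 64 4096    # Malloc0
        "$rpc" -s "$sock" bdev_malloc_create 64 4096    # Malloc1
        # ... nbd_start_disk both bdevs, run the dd/cmp verification, nbd_stop_disk ...
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM    # tear the app down so the next round restarts it
        sleep 3
    done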
00:05:08.184 21:26:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.184 21:26:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:08.445 21:26:53 event.app_repeat -- event/event.sh@39 -- # killprocess 60142 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60142 ']' 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60142 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60142 00:05:08.445 killing process with pid 60142 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60142' 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60142 00:05:08.445 21:26:53 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60142 00:05:08.704 spdk_app_start is called in Round 0. 00:05:08.704 Shutdown signal received, stop current app iteration 00:05:08.704 Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 reinitialization... 00:05:08.704 spdk_app_start is called in Round 1. 00:05:08.704 Shutdown signal received, stop current app iteration 00:05:08.704 Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 reinitialization... 00:05:08.704 spdk_app_start is called in Round 2. 00:05:08.704 Shutdown signal received, stop current app iteration 00:05:08.704 Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 reinitialization... 00:05:08.704 spdk_app_start is called in Round 3. 00:05:08.704 Shutdown signal received, stop current app iteration 00:05:08.704 21:26:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:08.704 21:26:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:08.704 00:05:08.704 real 0m19.156s 00:05:08.704 user 0m42.599s 00:05:08.704 sys 0m3.015s 00:05:08.704 21:26:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.704 21:26:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.704 ************************************ 00:05:08.704 END TEST app_repeat 00:05:08.704 ************************************ 00:05:08.704 21:26:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:08.704 21:26:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.704 21:26:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.704 21:26:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.704 21:26:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.962 ************************************ 00:05:08.962 START TEST cpu_locks 00:05:08.962 ************************************ 00:05:08.962 21:26:53 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.962 * Looking for test storage... 
00:05:08.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.962 21:26:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:08.962 21:26:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:08.962 21:26:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:08.962 21:26:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:08.962 21:26:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.962 21:26:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.962 21:26:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.962 ************************************ 00:05:08.962 START TEST default_locks 00:05:08.962 ************************************ 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60575 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60575 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60575 ']' 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.962 21:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.962 [2024-07-24 21:26:53.877919] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:08.962 [2024-07-24 21:26:53.878039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60575 ] 00:05:09.220 [2024-07-24 21:26:54.016894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.220 [2024-07-24 21:26:54.111806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.220 [2024-07-24 21:26:54.188072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.155 21:26:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.155 21:26:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:10.155 21:26:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60575 00:05:10.155 21:26:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60575 00:05:10.156 21:26:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60575 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60575 ']' 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60575 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60575 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.723 killing process with pid 60575 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60575' 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60575 00:05:10.723 21:26:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60575 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60575 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60575 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60575 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60575 ']' 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.289 21:26:56 
event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60575) - No such process 00:05:11.289 ERROR: process (pid: 60575) is no longer running 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.289 00:05:11.289 real 0m2.259s 00:05:11.289 user 0m2.350s 00:05:11.289 sys 0m0.701s 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.289 21:26:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.289 ************************************ 00:05:11.289 END TEST default_locks 00:05:11.289 ************************************ 00:05:11.289 21:26:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:11.289 21:26:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.289 21:26:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.289 21:26:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.289 ************************************ 00:05:11.289 START TEST default_locks_via_rpc 00:05:11.289 ************************************ 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60627 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60627 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60627 ']' 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.289 21:26:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.289 [2024-07-24 21:26:56.192047] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:11.289 [2024-07-24 21:26:56.192166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:05:11.546 [2024-07-24 21:26:56.333222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.546 [2024-07-24 21:26:56.502092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.805 [2024-07-24 21:26:56.579541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60627 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60627 00:05:12.372 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60627 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60627 ']' 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60627 00:05:12.939 21:26:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60627 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.939 killing process with pid 60627 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60627' 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60627 00:05:12.939 21:26:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60627 00:05:13.506 00:05:13.506 real 0m2.135s 00:05:13.506 user 0m2.228s 00:05:13.506 sys 0m0.671s 00:05:13.506 21:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.506 21:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.506 ************************************ 00:05:13.506 END TEST default_locks_via_rpc 00:05:13.506 ************************************ 00:05:13.506 21:26:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:13.506 21:26:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.506 21:26:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.506 21:26:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.506 ************************************ 00:05:13.506 START TEST non_locking_app_on_locked_coremask 00:05:13.506 ************************************ 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60678 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60678 /var/tmp/spdk.sock 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60678 ']' 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
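The locks_exist check that recurs through these cpu_locks traces is just lslocks filtered for the scheduler's core-lock files; a minimal equivalent (the pid value is illustrative):

    locks_exist() {
        local pid=$1
        # an SPDK target holds a file lock whose name contains spdk_cpu_lock for each core it has claimed
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 60627 && echo "core locks held" || echo "no core locks"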
00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.506 21:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.506 [2024-07-24 21:26:58.388683] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:13.506 [2024-07-24 21:26:58.388794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:05:13.764 [2024-07-24 21:26:58.529947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.764 [2024-07-24 21:26:58.701486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.023 [2024-07-24 21:26:58.785308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60700 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60700 /var/tmp/spdk2.sock 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60700 ']' 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.591 21:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.591 [2024-07-24 21:26:59.490602] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:14.591 [2024-07-24 21:26:59.490761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60700 ] 00:05:14.850 [2024-07-24 21:26:59.637142] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
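non_locking_app_on_locked_coremask, just traced, runs two targets on the same core mask: the first claims and locks core 0, the second opts out of core locking and answers on a separate RPC socket so the two can coexist. A sketch of that launch sequence, assuming the autotest helpers are sourced (binary path and sockets as in the trace, backgrounding simplified):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                   # first instance: claims core 0 and takes its lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same mask, but "CPU core locks deactivated"
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock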
00:05:14.850 [2024-07-24 21:26:59.637241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.109 [2024-07-24 21:26:59.935772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.109 [2024-07-24 21:27:00.089941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.677 21:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.677 21:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:15.677 21:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60678 00:05:15.677 21:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.677 21:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60678 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60678 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60678 ']' 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60678 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60678 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.613 killing process with pid 60678 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60678' 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60678 00:05:16.613 21:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60678 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60700 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60700 ']' 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60700 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60700 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.986 killing process with pid 60700 00:05:17.986 21:27:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60700' 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60700 00:05:17.986 21:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60700 00:05:18.244 00:05:18.244 real 0m4.853s 00:05:18.244 user 0m5.268s 00:05:18.244 sys 0m1.411s 00:05:18.244 21:27:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.244 21:27:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.244 ************************************ 00:05:18.244 END TEST non_locking_app_on_locked_coremask 00:05:18.244 ************************************ 00:05:18.244 21:27:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.244 21:27:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.244 21:27:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.244 21:27:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.244 ************************************ 00:05:18.244 START TEST locking_app_on_unlocked_coremask 00:05:18.244 ************************************ 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60772 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60772 /var/tmp/spdk.sock 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60772 ']' 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.244 21:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.502 [2024-07-24 21:27:03.269225] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:18.502 [2024-07-24 21:27:03.269347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60772 ] 00:05:18.502 [2024-07-24 21:27:03.401746] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
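killprocess, which closes out each of these cases (most recently pids 60678 and 60700), double-checks that the pid still belongs to an SPDK reactor before signalling it; roughly, simplified from the traced helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")         # an SPDK app shows up as reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                         # reap it so the next case starts from a clean slate
    }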
00:05:18.502 [2024-07-24 21:27:03.401826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.759 [2024-07-24 21:27:03.566774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.759 [2024-07-24 21:27:03.639714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60788 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60788 /var/tmp/spdk2.sock 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60788 ']' 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.323 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.324 21:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.581 [2024-07-24 21:27:04.379779] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:19.581 [2024-07-24 21:27:04.379921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60788 ] 00:05:19.581 [2024-07-24 21:27:04.529458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.838 [2024-07-24 21:27:04.822140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.112 [2024-07-24 21:27:04.964148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.678 21:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.678 21:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:20.678 21:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60788 00:05:20.678 21:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60788 00:05:20.678 21:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60772 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60772 ']' 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60772 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60772 00:05:21.611 killing process with pid 60772 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60772' 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60772 00:05:21.611 21:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60772 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60788 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60788 ']' 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60788 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60788 00:05:22.544 killing process with pid 60788 00:05:22.544 21:27:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60788' 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60788 00:05:22.544 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60788 00:05:23.111 00:05:23.111 real 0m4.741s 00:05:23.111 user 0m5.214s 00:05:23.111 sys 0m1.286s 00:05:23.111 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.111 21:27:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.111 ************************************ 00:05:23.111 END TEST locking_app_on_unlocked_coremask 00:05:23.111 ************************************ 00:05:23.111 21:27:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:23.111 21:27:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.111 21:27:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.111 21:27:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.111 ************************************ 00:05:23.111 START TEST locking_app_on_locked_coremask 00:05:23.111 ************************************ 00:05:23.111 21:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:23.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.111 21:27:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60866 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60866 /var/tmp/spdk.sock 00:05:23.111 21:27:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60866 ']' 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.111 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.111 [2024-07-24 21:27:08.059088] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:23.111 [2024-07-24 21:27:08.059197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:05:23.370 [2024-07-24 21:27:08.193625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.370 [2024-07-24 21:27:08.312378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.629 [2024-07-24 21:27:08.384391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60882 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60882 /var/tmp/spdk2.sock 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60882 /var/tmp/spdk2.sock 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.196 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60882 /var/tmp/spdk2.sock 00:05:24.197 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60882 ']' 00:05:24.197 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.197 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.197 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.197 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.197 21:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.197 [2024-07-24 21:27:09.005007] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:24.197 [2024-07-24 21:27:09.005648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60882 ] 00:05:24.197 [2024-07-24 21:27:09.156776] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60866 has claimed it. 00:05:24.197 [2024-07-24 21:27:09.156858] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.764 ERROR: process (pid: 60882) is no longer running 00:05:24.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60882) - No such process 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60866 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60866 00:05:24.764 21:27:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60866 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60866 ']' 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60866 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60866 00:05:25.331 killing process with pid 60866 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60866' 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60866 00:05:25.331 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60866 00:05:25.899 00:05:25.899 real 0m2.681s 00:05:25.899 user 0m2.905s 00:05:25.899 sys 0m0.744s 00:05:25.899 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.899 ************************************ 00:05:25.899 END 
TEST locking_app_on_locked_coremask 00:05:25.899 ************************************ 00:05:25.899 21:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.899 21:27:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.899 21:27:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.899 21:27:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.899 21:27:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.899 ************************************ 00:05:25.899 START TEST locking_overlapped_coremask 00:05:25.899 ************************************ 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60928 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60928 /var/tmp/spdk.sock 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60928 ']' 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.899 21:27:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.899 [2024-07-24 21:27:10.793101] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:25.899 [2024-07-24 21:27:10.793191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60928 ] 00:05:26.157 [2024-07-24 21:27:10.924535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.157 [2024-07-24 21:27:11.045665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.157 [2024-07-24 21:27:11.045517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.157 [2024-07-24 21:27:11.045665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.157 [2024-07-24 21:27:11.117466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60946 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60946 /var/tmp/spdk2.sock 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60946 /var/tmp/spdk2.sock 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60946 /var/tmp/spdk2.sock 00:05:27.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60946 ']' 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.090 21:27:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.090 [2024-07-24 21:27:11.873718] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:27.090 [2024-07-24 21:27:11.873825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:05:27.090 [2024-07-24 21:27:12.027282] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60928 has claimed it. 00:05:27.090 [2024-07-24 21:27:12.027354] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.657 ERROR: process (pid: 60946) is no longer running 00:05:27.657 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60946) - No such process 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60928 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60928 ']' 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60928 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60928 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.657 killing process with pid 60928 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60928' 00:05:27.657 21:27:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60928 00:05:27.657 21:27:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60928 00:05:28.228 00:05:28.228 real 0m2.451s 00:05:28.228 user 0m6.792s 00:05:28.228 sys 0m0.536s 00:05:28.228 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.228 ************************************ 00:05:28.228 END TEST locking_overlapped_coremask 00:05:28.228 ************************************ 00:05:28.228 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.228 21:27:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.228 21:27:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.228 21:27:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.228 21:27:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.492 ************************************ 00:05:28.492 START TEST locking_overlapped_coremask_via_rpc 00:05:28.492 ************************************ 00:05:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.492 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:28.492 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60991 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60991 /var/tmp/spdk.sock 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60991 ']' 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.493 21:27:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.493 [2024-07-24 21:27:13.290594] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:28.493 [2024-07-24 21:27:13.290694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:05:28.493 [2024-07-24 21:27:13.426915] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.493 [2024-07-24 21:27:13.426968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.752 [2024-07-24 21:27:13.554919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.752 [2024-07-24 21:27:13.555059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.752 [2024-07-24 21:27:13.555061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.752 [2024-07-24 21:27:13.615231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61009 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61009 /var/tmp/spdk2.sock 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61009 ']' 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.320 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.579 [2024-07-24 21:27:14.364480] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:29.579 [2024-07-24 21:27:14.364800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61009 ] 00:05:29.579 [2024-07-24 21:27:14.512319] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.579 [2024-07-24 21:27:14.512397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.839 [2024-07-24 21:27:14.741937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.839 [2024-07-24 21:27:14.742058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:29.839 [2024-07-24 21:27:14.742063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.098 [2024-07-24 21:27:14.886160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:30.356 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.357 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.357 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.357 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.616 [2024-07-24 21:27:15.360806] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60991 has claimed it. 00:05:30.616 request: 00:05:30.616 { 00:05:30.616 "method": "framework_enable_cpumask_locks", 00:05:30.616 "req_id": 1 00:05:30.616 } 00:05:30.616 Got JSON-RPC error response 00:05:30.616 response: 00:05:30.616 { 00:05:30.616 "code": -32603, 00:05:30.616 "message": "Failed to claim CPU core: 2" 00:05:30.616 } 00:05:30.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60991 /var/tmp/spdk.sock 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60991 ']' 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61009 /var/tmp/spdk2.sock 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61009 ']' 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.616 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.875 ************************************ 00:05:30.875 END TEST locking_overlapped_coremask_via_rpc 00:05:30.875 ************************************ 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.875 00:05:30.875 real 0m2.639s 00:05:30.875 user 0m1.352s 00:05:30.875 sys 0m0.184s 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.875 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.134 21:27:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.134 21:27:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60991 ]] 00:05:31.134 21:27:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60991 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60991 ']' 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60991 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60991 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.134 killing process with pid 60991 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60991' 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60991 00:05:31.134 21:27:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60991 00:05:31.393 21:27:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61009 ]] 00:05:31.393 21:27:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61009 00:05:31.393 21:27:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61009 ']' 00:05:31.393 21:27:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61009 00:05:31.393 21:27:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:31.393 21:27:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.393 
21:27:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61009 00:05:31.652 killing process with pid 61009 00:05:31.652 21:27:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:31.652 21:27:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:31.652 21:27:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61009' 00:05:31.652 21:27:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61009 00:05:31.652 21:27:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61009 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60991 ]] 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60991 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60991 ']' 00:05:32.220 Process with pid 60991 is not found 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60991 00:05:32.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60991) - No such process 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60991 is not found' 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61009 ]] 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61009 00:05:32.220 Process with pid 61009 is not found 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61009 ']' 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61009 00:05:32.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61009) - No such process 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61009 is not found' 00:05:32.220 21:27:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.220 00:05:32.220 real 0m23.243s 00:05:32.220 user 0m38.818s 00:05:32.220 sys 0m6.525s 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.220 21:27:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.220 ************************************ 00:05:32.220 END TEST cpu_locks 00:05:32.220 ************************************ 00:05:32.220 ************************************ 00:05:32.220 END TEST event 00:05:32.220 ************************************ 00:05:32.220 00:05:32.220 real 0m51.529s 00:05:32.220 user 1m36.843s 00:05:32.220 sys 0m10.347s 00:05:32.220 21:27:16 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.220 21:27:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.220 21:27:17 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:32.220 21:27:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.220 21:27:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.220 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:05:32.220 ************************************ 00:05:32.220 START TEST thread 00:05:32.220 ************************************ 00:05:32.220 21:27:17 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:32.220 * Looking for test storage... 
00:05:32.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:32.220 21:27:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.220 21:27:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:32.220 21:27:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.220 21:27:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.220 ************************************ 00:05:32.220 START TEST thread_poller_perf 00:05:32.220 ************************************ 00:05:32.220 21:27:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.220 [2024-07-24 21:27:17.161188] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:32.220 [2024-07-24 21:27:17.161303] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61132 ] 00:05:32.479 [2024-07-24 21:27:17.298236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.479 [2024-07-24 21:27:17.420006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.479 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:33.855 ====================================== 00:05:33.855 busy:2206669943 (cyc) 00:05:33.855 total_run_count: 312000 00:05:33.855 tsc_hz: 2200000000 (cyc) 00:05:33.855 ====================================== 00:05:33.855 poller_cost: 7072 (cyc), 3214 (nsec) 00:05:33.855 00:05:33.855 real 0m1.421s 00:05:33.855 user 0m1.249s 00:05:33.855 sys 0m0.064s 00:05:33.855 21:27:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.855 21:27:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.855 ************************************ 00:05:33.855 END TEST thread_poller_perf 00:05:33.855 ************************************ 00:05:33.855 21:27:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.855 21:27:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:33.855 21:27:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.855 21:27:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.855 ************************************ 00:05:33.855 START TEST thread_poller_perf 00:05:33.855 ************************************ 00:05:33.855 21:27:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.855 [2024-07-24 21:27:18.638950] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:33.855 [2024-07-24 21:27:18.639070] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ] 00:05:33.855 [2024-07-24 21:27:18.776805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.134 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:34.134 [2024-07-24 21:27:18.931927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.068 ====================================== 00:05:35.068 busy:2202309312 (cyc) 00:05:35.068 total_run_count: 4412000 00:05:35.068 tsc_hz: 2200000000 (cyc) 00:05:35.068 ====================================== 00:05:35.068 poller_cost: 499 (cyc), 226 (nsec) 00:05:35.068 00:05:35.068 real 0m1.416s 00:05:35.068 user 0m1.237s 00:05:35.068 sys 0m0.070s 00:05:35.068 21:27:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.068 21:27:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.068 ************************************ 00:05:35.068 END TEST thread_poller_perf 00:05:35.068 ************************************ 00:05:35.326 21:27:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.326 ************************************ 00:05:35.326 END TEST thread 00:05:35.326 ************************************ 00:05:35.326 00:05:35.326 real 0m3.037s 00:05:35.326 user 0m2.549s 00:05:35.326 sys 0m0.261s 00:05:35.326 21:27:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.326 21:27:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.326 21:27:20 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:35.326 21:27:20 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:35.326 21:27:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.326 21:27:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.326 21:27:20 -- common/autotest_common.sh@10 -- # set +x 00:05:35.326 ************************************ 00:05:35.326 START TEST app_cmdline 00:05:35.326 ************************************ 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:35.326 * Looking for test storage... 00:05:35.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:35.326 21:27:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:35.326 21:27:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61242 00:05:35.326 21:27:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:35.326 21:27:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61242 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61242 ']' 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.326 21:27:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.326 [2024-07-24 21:27:20.258296] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:35.326 [2024-07-24 21:27:20.258672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61242 ] 00:05:35.584 [2024-07-24 21:27:20.398347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.584 [2024-07-24 21:27:20.566817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.842 [2024-07-24 21:27:20.641190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.409 21:27:21 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.409 21:27:21 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:36.409 21:27:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:36.668 { 00:05:36.668 "version": "SPDK v24.09-pre git sha1 68f798423", 00:05:36.668 "fields": { 00:05:36.668 "major": 24, 00:05:36.668 "minor": 9, 00:05:36.668 "patch": 0, 00:05:36.668 "suffix": "-pre", 00:05:36.668 "commit": "68f798423" 00:05:36.668 } 00:05:36.668 } 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:36.668 21:27:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@644 -- # [[ 
-x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:36.668 21:27:21 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.235 request: 00:05:37.235 { 00:05:37.235 "method": "env_dpdk_get_mem_stats", 00:05:37.235 "req_id": 1 00:05:37.235 } 00:05:37.235 Got JSON-RPC error response 00:05:37.235 response: 00:05:37.235 { 00:05:37.235 "code": -32601, 00:05:37.235 "message": "Method not found" 00:05:37.235 } 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.235 21:27:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61242 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61242 ']' 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61242 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61242 00:05:37.235 killing process with pid 61242 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61242' 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@969 -- # kill 61242 00:05:37.235 21:27:21 app_cmdline -- common/autotest_common.sh@974 -- # wait 61242 00:05:37.803 ************************************ 00:05:37.803 END TEST app_cmdline 00:05:37.803 ************************************ 00:05:37.803 00:05:37.803 real 0m2.374s 00:05:37.803 user 0m2.973s 00:05:37.803 sys 0m0.548s 00:05:37.803 21:27:22 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.803 21:27:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.803 21:27:22 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:37.803 21:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.803 21:27:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.803 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:05:37.803 ************************************ 00:05:37.803 START TEST version 00:05:37.803 ************************************ 00:05:37.803 21:27:22 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:37.803 * Looking for test storage... 
00:05:37.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.803 21:27:22 version -- app/version.sh@17 -- # get_header_version major 00:05:37.803 21:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # cut -f2 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.803 21:27:22 version -- app/version.sh@17 -- # major=24 00:05:37.803 21:27:22 version -- app/version.sh@18 -- # get_header_version minor 00:05:37.803 21:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # cut -f2 00:05:37.803 21:27:22 version -- app/version.sh@18 -- # minor=9 00:05:37.803 21:27:22 version -- app/version.sh@19 -- # get_header_version patch 00:05:37.803 21:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # cut -f2 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.803 21:27:22 version -- app/version.sh@19 -- # patch=0 00:05:37.803 21:27:22 version -- app/version.sh@20 -- # get_header_version suffix 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # cut -f2 00:05:37.803 21:27:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.803 21:27:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.803 21:27:22 version -- app/version.sh@20 -- # suffix=-pre 00:05:37.803 21:27:22 version -- app/version.sh@22 -- # version=24.9 00:05:37.803 21:27:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:37.803 21:27:22 version -- app/version.sh@28 -- # version=24.9rc0 00:05:37.803 21:27:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:37.804 21:27:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:37.804 21:27:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:37.804 21:27:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:37.804 00:05:37.804 real 0m0.166s 00:05:37.804 user 0m0.087s 00:05:37.804 sys 0m0.111s 00:05:37.804 ************************************ 00:05:37.804 END TEST version 00:05:37.804 ************************************ 00:05:37.804 21:27:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.804 21:27:22 version -- common/autotest_common.sh@10 -- # set +x 00:05:37.804 21:27:22 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:37.804 21:27:22 -- spdk/autotest.sh@202 -- # uname -s 00:05:37.804 21:27:22 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:37.804 21:27:22 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:37.804 21:27:22 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:05:37.804 21:27:22 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:05:37.804 21:27:22 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:37.804 21:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.804 21:27:22 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.804 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:05:37.804 ************************************ 00:05:37.804 START TEST spdk_dd 00:05:37.804 ************************************ 00:05:37.804 21:27:22 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:38.063 * Looking for test storage... 00:05:38.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:38.063 21:27:22 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.063 21:27:22 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.063 21:27:22 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.063 21:27:22 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.063 21:27:22 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.063 21:27:22 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.063 21:27:22 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.063 21:27:22 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:38.063 21:27:22 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.063 21:27:22 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.322 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.322 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.322 21:27:23 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:38.322 21:27:23 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:05:38.322 21:27:23 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@230 -- # local class 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@232 -- # local progif 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@233 -- # class=01 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:38.322 21:27:23 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:05:38.323 21:27:23 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:38.323 21:27:23 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:38.323 21:27:23 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:05:38.323 21:27:23 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:38.323 21:27:23 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:05:38.584 21:27:23 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:05:38.584 21:27:23 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:38.584 21:27:23 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:38.584 21:27:23 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:38.584 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:38.585 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:38.586 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:38.587 * spdk_dd linked to liburing 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:38.587 21:27:23 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:38.587 21:27:23 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:38.587 21:27:23 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:38.588 21:27:23 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:38.588 21:27:23 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:38.588 21:27:23 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:38.588 21:27:23 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:38.588 21:27:23 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:38.588 21:27:23 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:38.588 21:27:23 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:38.588 21:27:23 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:38.588 21:27:23 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.588 21:27:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:38.588 ************************************ 00:05:38.588 START TEST spdk_dd_basic_rw 00:05:38.588 ************************************ 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:38.588 * Looking for test storage... 00:05:38.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:38.588 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:38.851 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:38.851 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.852 ************************************ 00:05:38.852 START TEST dd_bs_lt_native_bs 00:05:38.852 ************************************ 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:38.852 21:27:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.853 { 00:05:38.853 "subsystems": [ 00:05:38.853 { 00:05:38.853 "subsystem": "bdev", 00:05:38.853 "config": [ 00:05:38.853 { 00:05:38.853 "params": { 00:05:38.853 "trtype": "pcie", 00:05:38.853 "traddr": "0000:00:10.0", 00:05:38.853 "name": "Nvme0" 00:05:38.853 }, 00:05:38.853 "method": 
"bdev_nvme_attach_controller" 00:05:38.853 }, 00:05:38.853 { 00:05:38.853 "method": "bdev_wait_for_examine" 00:05:38.853 } 00:05:38.853 ] 00:05:38.853 } 00:05:38.853 ] 00:05:38.853 } 00:05:38.853 [2024-07-24 21:27:23.733556] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:38.853 [2024-07-24 21:27:23.733679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61569 ] 00:05:39.111 [2024-07-24 21:27:23.866317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.111 [2024-07-24 21:27:24.007237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.111 [2024-07-24 21:27:24.084657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.370 [2024-07-24 21:27:24.198456] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:39.370 [2024-07-24 21:27:24.198523] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.370 [2024-07-24 21:27:24.367854] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.629 00:05:39.629 real 0m0.841s 00:05:39.629 user 0m0.575s 00:05:39.629 sys 0m0.211s 00:05:39.629 ************************************ 00:05:39.629 END TEST dd_bs_lt_native_bs 00:05:39.629 ************************************ 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:39.629 21:27:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.630 ************************************ 00:05:39.630 START TEST dd_rw 00:05:39.630 ************************************ 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:39.630 21:27:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.198 21:27:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:40.198 21:27:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:40.198 21:27:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.198 21:27:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.456 { 00:05:40.456 "subsystems": [ 00:05:40.456 { 00:05:40.456 "subsystem": "bdev", 00:05:40.456 "config": [ 00:05:40.456 { 00:05:40.456 "params": { 00:05:40.456 "trtype": "pcie", 00:05:40.456 "traddr": "0000:00:10.0", 00:05:40.456 "name": "Nvme0" 00:05:40.456 }, 00:05:40.456 "method": "bdev_nvme_attach_controller" 00:05:40.456 }, 00:05:40.456 { 00:05:40.456 "method": "bdev_wait_for_examine" 00:05:40.456 } 00:05:40.456 ] 00:05:40.456 } 00:05:40.456 ] 00:05:40.456 } 00:05:40.456 [2024-07-24 21:27:25.248676] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
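The check_liburing pass traced earlier (dd/common.sh@137-144) is what flips liburing_in_use to 1 for this run. A minimal standalone sketch of that check, assuming the spdk_dd path shown in the trace; this is the idea written out, not the dd/common.sh source:

#!/usr/bin/env bash
# Scan the DT_NEEDED entries of the binary and flag whether liburing is linked,
# mirroring the objdump | grep NEEDED | read loop in the trace above.
binary=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0
while read -r _ lib _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(objdump -p "$binary" | grep NEEDED)
printf 'liburing_in_use=%s\n' "$liburing_in_use"

With the build configuration dumped above (CONFIG_URING=y), the trace reaches the same conclusion and prints "spdk_dd linked to liburing".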
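The copy that starts here is the first cell of a small block-size/queue-depth matrix set up at dd/basic_rw.sh@15-25 above. A sketch of that matrix, assuming the 4096-byte native block size extracted from the identify output earlier (LBA Format #04); file names are shortened and the echo lines only illustrate the spdk_dd invocations that follow in the trace:

native_bs=4096                        # from "LBA Format #04: Data Size: 4096" above
count=15
size=$((count * native_bs))           # 61440 for the 4096-byte rounds traced here
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))       # 4096 8192 16384
done
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        echo "spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd"
        echo "spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count"
    done
done

The 60 kB transfers reported below ("Copying: 60/60 [kB]") are exactly count * native_bs = 61440 bytes.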
00:05:40.456 [2024-07-24 21:27:25.248816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61608 ] 00:05:40.456 [2024-07-24 21:27:25.389522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.728 [2024-07-24 21:27:25.515633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.728 [2024-07-24 21:27:25.587128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.309  Copying: 60/60 [kB] (average 19 MBps) 00:05:41.309 00:05:41.309 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:41.309 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:41.309 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.309 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.309 [2024-07-24 21:27:26.088371] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:41.309 [2024-07-24 21:27:26.089167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61627 ] 00:05:41.309 { 00:05:41.309 "subsystems": [ 00:05:41.309 { 00:05:41.309 "subsystem": "bdev", 00:05:41.309 "config": [ 00:05:41.309 { 00:05:41.309 "params": { 00:05:41.309 "trtype": "pcie", 00:05:41.309 "traddr": "0000:00:10.0", 00:05:41.309 "name": "Nvme0" 00:05:41.309 }, 00:05:41.309 "method": "bdev_nvme_attach_controller" 00:05:41.309 }, 00:05:41.309 { 00:05:41.309 "method": "bdev_wait_for_examine" 00:05:41.309 } 00:05:41.309 ] 00:05:41.309 } 00:05:41.309 ] 00:05:41.309 } 00:05:41.309 [2024-07-24 21:27:26.220417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.568 [2024-07-24 21:27:26.336198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.568 [2024-07-24 21:27:26.407917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.154  Copying: 60/60 [kB] (average 19 MBps) 00:05:42.154 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.154 21:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.154 [2024-07-24 21:27:26.931314] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:42.154 [2024-07-24 21:27:26.931432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:05:42.154 { 00:05:42.154 "subsystems": [ 00:05:42.154 { 00:05:42.154 "subsystem": "bdev", 00:05:42.154 "config": [ 00:05:42.154 { 00:05:42.154 "params": { 00:05:42.154 "trtype": "pcie", 00:05:42.154 "traddr": "0000:00:10.0", 00:05:42.154 "name": "Nvme0" 00:05:42.154 }, 00:05:42.154 "method": "bdev_nvme_attach_controller" 00:05:42.154 }, 00:05:42.154 { 00:05:42.154 "method": "bdev_wait_for_examine" 00:05:42.154 } 00:05:42.154 ] 00:05:42.154 } 00:05:42.154 ] 00:05:42.154 } 00:05:42.154 [2024-07-24 21:27:27.068521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.414 [2024-07-24 21:27:27.215441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.414 [2024-07-24 21:27:27.302431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.932  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:42.932 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:42.932 21:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.499 21:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:43.499 21:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:43.499 21:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.499 21:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.499 [2024-07-24 21:27:28.389897] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:43.499 [2024-07-24 21:27:28.389985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61667 ] 00:05:43.499 { 00:05:43.499 "subsystems": [ 00:05:43.499 { 00:05:43.499 "subsystem": "bdev", 00:05:43.499 "config": [ 00:05:43.499 { 00:05:43.499 "params": { 00:05:43.499 "trtype": "pcie", 00:05:43.499 "traddr": "0000:00:10.0", 00:05:43.499 "name": "Nvme0" 00:05:43.499 }, 00:05:43.499 "method": "bdev_nvme_attach_controller" 00:05:43.499 }, 00:05:43.499 { 00:05:43.499 "method": "bdev_wait_for_examine" 00:05:43.499 } 00:05:43.499 ] 00:05:43.499 } 00:05:43.499 ] 00:05:43.499 } 00:05:43.757 [2024-07-24 21:27:28.517697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.757 [2024-07-24 21:27:28.678692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.757 [2024-07-24 21:27:28.751421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.272  Copying: 60/60 [kB] (average 58 MBps) 00:05:44.272 00:05:44.272 21:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:44.272 21:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:44.272 21:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.272 21:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.272 [2024-07-24 21:27:29.235655] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
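Every bs/qd combination in these traces is the same round trip: write a generated pattern from dd.dump0 to the Nvme0n1 bdev, read it back into dd.dump1, and compare the two files with diff -q. In the test the bdev configuration is produced by gen_conf and handed to spdk_dd over /dev/fd/62; the sketch below inlines the same JSON (copied from the traces) through process substitution and shortens the repository paths, so the exact invocation differs from the script:

    SPDK_DD=./build/bin/spdk_dd        # the traces use the full /home/vagrant/spdk_repo path
    bdev_conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    "$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json <(printf '%s' "$bdev_conf")
    "$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json <(printf '%s' "$bdev_conf")
    diff -q test/dd/dd.dump0 test/dd/dd.dump1   # no output means the write/read round trip was byte-identical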
00:05:44.272 [2024-07-24 21:27:29.235775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61686 ] 00:05:44.272 { 00:05:44.272 "subsystems": [ 00:05:44.272 { 00:05:44.272 "subsystem": "bdev", 00:05:44.272 "config": [ 00:05:44.272 { 00:05:44.272 "params": { 00:05:44.272 "trtype": "pcie", 00:05:44.272 "traddr": "0000:00:10.0", 00:05:44.272 "name": "Nvme0" 00:05:44.272 }, 00:05:44.272 "method": "bdev_nvme_attach_controller" 00:05:44.272 }, 00:05:44.272 { 00:05:44.272 "method": "bdev_wait_for_examine" 00:05:44.272 } 00:05:44.272 ] 00:05:44.272 } 00:05:44.272 ] 00:05:44.272 } 00:05:44.529 [2024-07-24 21:27:29.371467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.787 [2024-07-24 21:27:29.535784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.787 [2024-07-24 21:27:29.608841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.044  Copying: 60/60 [kB] (average 58 MBps) 00:05:45.044 00:05:45.044 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.044 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:45.044 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.044 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.044 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:45.044 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.045 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:45.045 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.045 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.045 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.045 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.302 [2024-07-24 21:27:30.092891] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
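Between combinations the traces call clear_nvme, whose visible effect is to overwrite the start of the bdev with a single 1 MiB block of zeroes before the next pattern is written; the empty second argument and the size argument are passed through but do not change the spdk_dd invocation captured here. A sketch of just that effect, reusing SPDK_DD and bdev_conf from the sketch above:

    # visible effect of: clear_nvme Nvme0n1 '' 61440
    bs=1048576    # 1 MiB, matching "Copying: 1024/1024 [kB]"
    count=1
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json <(printf '%s' "$bdev_conf")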
00:05:45.302 [2024-07-24 21:27:30.093459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61707 ] 00:05:45.302 { 00:05:45.302 "subsystems": [ 00:05:45.302 { 00:05:45.302 "subsystem": "bdev", 00:05:45.302 "config": [ 00:05:45.302 { 00:05:45.302 "params": { 00:05:45.302 "trtype": "pcie", 00:05:45.302 "traddr": "0000:00:10.0", 00:05:45.302 "name": "Nvme0" 00:05:45.302 }, 00:05:45.302 "method": "bdev_nvme_attach_controller" 00:05:45.302 }, 00:05:45.302 { 00:05:45.302 "method": "bdev_wait_for_examine" 00:05:45.302 } 00:05:45.302 ] 00:05:45.302 } 00:05:45.302 ] 00:05:45.302 } 00:05:45.302 [2024-07-24 21:27:30.226066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.560 [2024-07-24 21:27:30.383291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.560 [2024-07-24 21:27:30.456137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:46.076  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:46.076 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:46.076 21:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.642 21:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:46.642 21:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.642 21:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.642 21:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.900 { 00:05:46.900 "subsystems": [ 00:05:46.900 { 00:05:46.900 "subsystem": "bdev", 00:05:46.900 "config": [ 00:05:46.900 { 00:05:46.900 "params": { 00:05:46.900 "trtype": "pcie", 00:05:46.900 "traddr": "0000:00:10.0", 00:05:46.900 "name": "Nvme0" 00:05:46.900 }, 00:05:46.900 "method": "bdev_nvme_attach_controller" 00:05:46.900 }, 00:05:46.900 { 00:05:46.900 "method": "bdev_wait_for_examine" 00:05:46.900 } 00:05:46.900 ] 00:05:46.900 } 00:05:46.900 ] 00:05:46.900 } 00:05:46.900 [2024-07-24 21:27:31.651581] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:46.900 [2024-07-24 21:27:31.651685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61726 ] 00:05:46.900 [2024-07-24 21:27:31.782506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.159 [2024-07-24 21:27:31.937742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.159 [2024-07-24 21:27:32.015714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.726  Copying: 56/56 [kB] (average 54 MBps) 00:05:47.726 00:05:47.726 21:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:47.726 21:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.726 21:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.726 21:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.726 [2024-07-24 21:27:32.494572] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:47.726 [2024-07-24 21:27:32.494702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61745 ] 00:05:47.726 { 00:05:47.726 "subsystems": [ 00:05:47.726 { 00:05:47.726 "subsystem": "bdev", 00:05:47.726 "config": [ 00:05:47.726 { 00:05:47.726 "params": { 00:05:47.726 "trtype": "pcie", 00:05:47.726 "traddr": "0000:00:10.0", 00:05:47.726 "name": "Nvme0" 00:05:47.726 }, 00:05:47.726 "method": "bdev_nvme_attach_controller" 00:05:47.726 }, 00:05:47.726 { 00:05:47.726 "method": "bdev_wait_for_examine" 00:05:47.726 } 00:05:47.726 ] 00:05:47.726 } 00:05:47.726 ] 00:05:47.726 } 00:05:47.726 [2024-07-24 21:27:32.630068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.984 [2024-07-24 21:27:32.775874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.984 [2024-07-24 21:27:32.859162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.502  Copying: 56/56 [kB] (average 54 MBps) 00:05:48.502 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 
--json /dev/fd/62 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.502 21:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.502 { 00:05:48.502 "subsystems": [ 00:05:48.502 { 00:05:48.502 "subsystem": "bdev", 00:05:48.502 "config": [ 00:05:48.502 { 00:05:48.502 "params": { 00:05:48.502 "trtype": "pcie", 00:05:48.502 "traddr": "0000:00:10.0", 00:05:48.502 "name": "Nvme0" 00:05:48.502 }, 00:05:48.502 "method": "bdev_nvme_attach_controller" 00:05:48.502 }, 00:05:48.502 { 00:05:48.502 "method": "bdev_wait_for_examine" 00:05:48.502 } 00:05:48.502 ] 00:05:48.502 } 00:05:48.502 ] 00:05:48.502 } 00:05:48.502 [2024-07-24 21:27:33.416643] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:48.502 [2024-07-24 21:27:33.416753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61766 ] 00:05:48.760 [2024-07-24 21:27:33.558323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.760 [2024-07-24 21:27:33.687936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.019 [2024-07-24 21:27:33.767313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.278  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:49.278 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:49.278 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.845 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:49.845 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:49.845 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.845 21:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.103 [2024-07-24 21:27:34.854704] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:50.103 [2024-07-24 21:27:34.854799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:05:50.103 { 00:05:50.103 "subsystems": [ 00:05:50.103 { 00:05:50.103 "subsystem": "bdev", 00:05:50.103 "config": [ 00:05:50.103 { 00:05:50.103 "params": { 00:05:50.103 "trtype": "pcie", 00:05:50.103 "traddr": "0000:00:10.0", 00:05:50.103 "name": "Nvme0" 00:05:50.103 }, 00:05:50.103 "method": "bdev_nvme_attach_controller" 00:05:50.103 }, 00:05:50.103 { 00:05:50.103 "method": "bdev_wait_for_examine" 00:05:50.103 } 00:05:50.104 ] 00:05:50.104 } 00:05:50.104 ] 00:05:50.104 } 00:05:50.104 [2024-07-24 21:27:34.984994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.361 [2024-07-24 21:27:35.122422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.361 [2024-07-24 21:27:35.203330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.621  Copying: 56/56 [kB] (average 54 MBps) 00:05:50.621 00:05:50.621 21:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.621 21:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:50.621 21:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.621 21:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.880 [2024-07-24 21:27:35.651027] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:50.880 [2024-07-24 21:27:35.651134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61804 ] 00:05:50.880 { 00:05:50.880 "subsystems": [ 00:05:50.880 { 00:05:50.880 "subsystem": "bdev", 00:05:50.880 "config": [ 00:05:50.880 { 00:05:50.880 "params": { 00:05:50.880 "trtype": "pcie", 00:05:50.880 "traddr": "0000:00:10.0", 00:05:50.880 "name": "Nvme0" 00:05:50.880 }, 00:05:50.880 "method": "bdev_nvme_attach_controller" 00:05:50.880 }, 00:05:50.880 { 00:05:50.880 "method": "bdev_wait_for_examine" 00:05:50.880 } 00:05:50.880 ] 00:05:50.880 } 00:05:50.880 ] 00:05:50.880 } 00:05:50.880 [2024-07-24 21:27:35.785766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.138 [2024-07-24 21:27:35.905370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.138 [2024-07-24 21:27:35.979993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.397  Copying: 56/56 [kB] (average 54 MBps) 00:05:51.397 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.397 21:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.655 [2024-07-24 21:27:36.442377] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:51.655 [2024-07-24 21:27:36.442477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61825 ] 00:05:51.655 { 00:05:51.655 "subsystems": [ 00:05:51.655 { 00:05:51.655 "subsystem": "bdev", 00:05:51.655 "config": [ 00:05:51.655 { 00:05:51.655 "params": { 00:05:51.656 "trtype": "pcie", 00:05:51.656 "traddr": "0000:00:10.0", 00:05:51.656 "name": "Nvme0" 00:05:51.656 }, 00:05:51.656 "method": "bdev_nvme_attach_controller" 00:05:51.656 }, 00:05:51.656 { 00:05:51.656 "method": "bdev_wait_for_examine" 00:05:51.656 } 00:05:51.656 ] 00:05:51.656 } 00:05:51.656 ] 00:05:51.656 } 00:05:51.656 [2024-07-24 21:27:36.580141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.914 [2024-07-24 21:27:36.683247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.914 [2024-07-24 21:27:36.760265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.481  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:52.481 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:52.481 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.049 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:53.049 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:53.049 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.049 21:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.049 [2024-07-24 21:27:37.791694] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
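At this point all three block sizes have been exercised, and the count/size pairs in the traces line up as 15 x 4096 = 61440, 7 x 8192 = 57344 and 3 x 16384 = 49152 bytes, i.e. the 60 kB, 56 kB and 48 kB totals reported by the Copying lines. A sketch of the sweep those numbers imply; the count formula is an inference that happens to reproduce 15, 7 and 3, not a line quoted from basic_rw.sh:

    qds=(1 64)                        # the two queue depths seen in the --qd arguments
    for bs in "${bss[@]}"; do         # 4096, 8192, 16384 from the earlier sketch
        for qd in "${qds[@]}"; do
            count=$((61440 / bs))     # assumed rule; integer division gives 15, 7, 3
            size=$((count * bs))
            : # write, read back and diff as in the round-trip sketch above
        done
    done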
00:05:53.049 [2024-07-24 21:27:37.791813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61844 ] 00:05:53.049 { 00:05:53.049 "subsystems": [ 00:05:53.049 { 00:05:53.049 "subsystem": "bdev", 00:05:53.049 "config": [ 00:05:53.049 { 00:05:53.049 "params": { 00:05:53.049 "trtype": "pcie", 00:05:53.049 "traddr": "0000:00:10.0", 00:05:53.049 "name": "Nvme0" 00:05:53.049 }, 00:05:53.049 "method": "bdev_nvme_attach_controller" 00:05:53.049 }, 00:05:53.049 { 00:05:53.049 "method": "bdev_wait_for_examine" 00:05:53.049 } 00:05:53.049 ] 00:05:53.049 } 00:05:53.049 ] 00:05:53.049 } 00:05:53.049 [2024-07-24 21:27:37.923598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.049 [2024-07-24 21:27:38.042332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.308 [2024-07-24 21:27:38.119359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.567  Copying: 48/48 [kB] (average 46 MBps) 00:05:53.567 00:05:53.567 21:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:53.567 21:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:53.567 21:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.567 21:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.825 [2024-07-24 21:27:38.571606] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:53.825 [2024-07-24 21:27:38.571732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61863 ] 00:05:53.825 { 00:05:53.825 "subsystems": [ 00:05:53.825 { 00:05:53.825 "subsystem": "bdev", 00:05:53.825 "config": [ 00:05:53.825 { 00:05:53.825 "params": { 00:05:53.825 "trtype": "pcie", 00:05:53.825 "traddr": "0000:00:10.0", 00:05:53.825 "name": "Nvme0" 00:05:53.825 }, 00:05:53.825 "method": "bdev_nvme_attach_controller" 00:05:53.825 }, 00:05:53.825 { 00:05:53.825 "method": "bdev_wait_for_examine" 00:05:53.825 } 00:05:53.825 ] 00:05:53.825 } 00:05:53.825 ] 00:05:53.825 } 00:05:53.825 [2024-07-24 21:27:38.702811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.084 [2024-07-24 21:27:38.849180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.084 [2024-07-24 21:27:38.923854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.651  Copying: 48/48 [kB] (average 23 MBps) 00:05:54.652 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.652 21:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.652 { 00:05:54.652 "subsystems": [ 00:05:54.652 { 00:05:54.652 "subsystem": "bdev", 00:05:54.652 "config": [ 00:05:54.652 { 00:05:54.652 "params": { 00:05:54.652 "trtype": "pcie", 00:05:54.652 "traddr": "0000:00:10.0", 00:05:54.652 "name": "Nvme0" 00:05:54.652 }, 00:05:54.652 "method": "bdev_nvme_attach_controller" 00:05:54.652 }, 00:05:54.652 { 00:05:54.652 "method": "bdev_wait_for_examine" 00:05:54.652 } 00:05:54.652 ] 00:05:54.652 } 00:05:54.652 ] 00:05:54.652 } 00:05:54.652 [2024-07-24 21:27:39.422612] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:54.652 [2024-07-24 21:27:39.422755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61884 ] 00:05:54.652 [2024-07-24 21:27:39.559311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.910 [2024-07-24 21:27:39.655574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.910 [2024-07-24 21:27:39.731524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.168  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:55.168 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:55.168 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.736 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:55.736 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:55.736 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.736 21:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.736 { 00:05:55.736 "subsystems": [ 00:05:55.736 { 00:05:55.736 "subsystem": "bdev", 00:05:55.736 "config": [ 00:05:55.736 { 00:05:55.736 "params": { 00:05:55.736 "trtype": "pcie", 00:05:55.736 "traddr": "0000:00:10.0", 00:05:55.736 "name": "Nvme0" 00:05:55.736 }, 00:05:55.736 "method": "bdev_nvme_attach_controller" 00:05:55.736 }, 00:05:55.736 { 00:05:55.736 "method": "bdev_wait_for_examine" 00:05:55.736 } 00:05:55.736 ] 00:05:55.736 } 00:05:55.736 ] 00:05:55.736 } 00:05:55.995 [2024-07-24 21:27:40.737610] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:55.995 [2024-07-24 21:27:40.737747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61903 ] 00:05:55.995 [2024-07-24 21:27:40.876662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.254 [2024-07-24 21:27:41.012393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.254 [2024-07-24 21:27:41.092612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.822  Copying: 48/48 [kB] (average 46 MBps) 00:05:56.822 00:05:56.822 21:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:56.822 21:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:56.822 21:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.822 21:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.822 { 00:05:56.822 "subsystems": [ 00:05:56.822 { 00:05:56.822 "subsystem": "bdev", 00:05:56.822 "config": [ 00:05:56.822 { 00:05:56.822 "params": { 00:05:56.822 "trtype": "pcie", 00:05:56.822 "traddr": "0000:00:10.0", 00:05:56.822 "name": "Nvme0" 00:05:56.822 }, 00:05:56.822 "method": "bdev_nvme_attach_controller" 00:05:56.822 }, 00:05:56.822 { 00:05:56.822 "method": "bdev_wait_for_examine" 00:05:56.822 } 00:05:56.822 ] 00:05:56.822 } 00:05:56.822 ] 00:05:56.822 } 00:05:56.822 [2024-07-24 21:27:41.590152] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:56.822 [2024-07-24 21:27:41.590259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:05:56.822 [2024-07-24 21:27:41.728913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.081 [2024-07-24 21:27:41.854769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.081 [2024-07-24 21:27:41.934777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.649  Copying: 48/48 [kB] (average 46 MBps) 00:05:57.649 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.649 21:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.649 { 00:05:57.649 "subsystems": [ 00:05:57.649 { 00:05:57.649 "subsystem": "bdev", 00:05:57.649 "config": [ 00:05:57.649 { 00:05:57.649 "params": { 00:05:57.649 "trtype": "pcie", 00:05:57.649 "traddr": "0000:00:10.0", 00:05:57.649 "name": "Nvme0" 00:05:57.649 }, 00:05:57.649 "method": "bdev_nvme_attach_controller" 00:05:57.649 }, 00:05:57.649 { 00:05:57.649 "method": "bdev_wait_for_examine" 00:05:57.649 } 00:05:57.649 ] 00:05:57.649 } 00:05:57.649 ] 00:05:57.649 } 00:05:57.649 [2024-07-24 21:27:42.414903] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:05:57.649 [2024-07-24 21:27:42.415025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:05:57.649 [2024-07-24 21:27:42.550677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.649 [2024-07-24 21:27:42.643704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.908 [2024-07-24 21:27:42.718600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.167  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:58.167 00:05:58.167 00:05:58.167 real 0m18.542s 00:05:58.167 user 0m13.591s 00:05:58.167 sys 0m7.269s 00:05:58.167 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.167 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.167 ************************************ 00:05:58.167 END TEST dd_rw 00:05:58.167 ************************************ 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.426 ************************************ 00:05:58.426 START TEST dd_rw_offset 00:05:58.426 ************************************ 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=5cmrn87qbqnm7yn1nj1ncnyfl274qt00bweqektjsanwang2dtkrboc1q28i5qaqsjeglf0k8u7ymbvjwh2t6mo5x0yeh2k0oxfzhwa8dawqodt6b0hhnn62z5cxhcw1revwca87xlx53pkv2jxpu55k7mayxozqv4upi7akte2ya4vzcc97ag2bg967mtlxeufkk6zebzbmyud29uvs8b3vv0ikmzqiapnk4pdwlryhd5hh5b5cup10zx9ds4xmj7dbnrkgg8fglwa7o3opx5b6o4swzadw0wsuhb5esd0reum74xufst1nz3sfmqfd55iwxt3b9ra55f7r7mcedis2e74woczrto699poo1jxlxiygxu2k8s415m5mev0ojkvw2mqi66hgiikvxw13qd0wcndkjcp49kmpz8sz21t2baa7f9t55e942j2n5ne31xamj16bv8gf7gbb7j66tk4j5t6w5u208ur19gblogsv7kxiou84ry1vk0yq41m82imm49ob2ih47v33fra9svo13wgxqwduxp3okvg2fh3gfdln8i9kx4cd36a0e6ch4pcrrguf8arn7xu1clf1tuqfdzstg669wi76m1q55yso5u748noazbridne1k4s6pydtqt2mvek8iae4nlpjkg9ltvii1kugbvknoywgzdock2fqq5rmgdzg07bzgank9fcf68onlsyjis6nwuzxtk30m5yweaxu9dzcnn3kjxhzy3u72nvvemidl4kcj70tgvo7otaiprdbv4am9bz1eshcbuormc3huubh7thhx4ditco0zic8grr9xb2tlcy225bjdrty7whj129qpxo3wdwufeipuhfb8mjz2gmlylfelt5yyfigeq4m6007zesrjxex8ton4rv07jssoxjsnleqxcrdnufnovx02ccd1wxhnb8qv3t7sk9sazae9p2riav13tmhlcmgjhn6p2sqkytotqtan9ta00hsxnz1ci308zx9fkvd6005vy1ta4o2uu0unybt5y7kajy1lb2a0bfxk9devv6thin5w6l1741h3s2xmuztpee3fvt6f6i302xh7kbbiwudf8n54tlw8kmhw45ipds3xfbxhucywtnt5nwz1go0nqn8gyfgozv1ptqucdl44935nu2g0bi6p7nkd5nbsyfmlz5ikf8x9dwdw0bg5w1yk9xu9gpt7rtkb60z17iudatxbaetuiojkelyq14mvuxgrmp32korrdlceff6ld5u2xaoiuj4ejeetkxbpgkizw9ej0ldyurg2be4d46sre5o1lu3smprzud0r26splfxqaccmh615vegugoeiybh3wp7786a6tpylr8xwgdz2x58pb0hd4uxmyfiy2cwyady0owl18u1c9d4lrrsqlg67vkp9tnx8baaczr9er0hsfim0zu1f7xivz0f5fcxiohjzknxdfwko3p7j44hdlem4g47gdu4vfy6jzii12rz8oc1kwb1yqwlq2qn8mmkv83xvq248iqf9o5ngvuzhgtxmj6og082fjtddjdtbssinbe93d4uljgzs126igpiv1cp9o05yhvbbvfunq7odj41n7daxs3gqulhvo4jawh1mf60661mndrh2v6y8lwgo17i2voy3ytf1s8na2we3ehfkamh2sqol57k5xkaauz5k1qdggm954omds0upvj4ewm8bipa6t3huf6h8l78dqgvuvgk975ajblr8k4pwl09shjhob2k5w5glhmtj63gd90x0zof0i6xtbddumhx8pqye9y3377tw2453mbwn4j6dc50oq0r0dru1ycqv95sx3fvklh6hwk0z05xq3fwwvdj57h7v6bm0n4rqs7rmb5j9ahvrwx0u8r69pn8v6njp10pp9by2h693e752p6jabbvfhfoyplnb262ycv22da34c27hq1q9a2w8mgjybojjcepgb8v88jrj63wwf9umsvur6nzhedpadpe6w8lx941g1yqzr4f6olm50gac13kbwvemeuay4ef6xfe9lm7w52fdfbqk7e938ulj4lmbmafukjixegdatevxgb5i998sa48umtc09qjnn2h6q7fsvasiyy4x26kshxn28zvu9kjqhwwtzettf985vprbo8hbeltab42d8bwhfjwda2ctc1b9izab3fg0l4o5suhi2twj2ydw469wgb8a35ek56g3l0obxuofus5mjy94xw7riupb6b2wv3yutsdszdiqn4nnid9nnrxgzl3byi87u9ldj53zapz88u72o3fc4unbn35jpr1vd1oaj2n5rwm6s7cwe2gxejcd0wau5pddsmd4l1w5b0l2xxxyu2qnh5n1x37qkpdsqh6xib1q5xfrxerk8420wt684dtr2ym087z7h1tqwixj4k8rsmor2bhuewfqhg0o5ws2gozd5nsiuoun6gcdr1s69vf2xt5hzefxlkyha72hjjrkt6d8jyxd2oe10wlvcuzj3jppn93rqsx7xpa4dww9lf6grr01vxrssvpq0xmcixb0fv3vx9nwx6d1su3pdimxekea02ixpcz6q2njflo3q950op3syl5xl55fv5gnzveh1ge3mrqgxessb2672bhgp8v8kx05xm4dmt9lgglapnrsgjpwysubacuxer0ftuj6mzg720uu2cw263j9qwg7qko7vomm232bssu5al2qu6g18aeec5uor9jjjbrdqpprlotmcba39eywdt2v3u0r1g2ho0dlv5e40at864cvyqp9md82nmz2am999djto24ibmpgcacqcq9kwwfesjy9hzt81g8x3uhmv6p0zhgmihlynjqahnmrfhv9gmx7ewgmfe7lymo3ijmbw069qfjj02n6t5mfz1yiz4cbaaf3d6wh0jxt03edvvm4mj10c6istecjs96f4q0qd5m3eg95003n349butt9utgoqk69g5qhvv7bv51j6t44h75efdo8soo2c488l0p3jbg0yg9mxnpc4wd3hmcyfuwj78q01qt0cqdd5iirpkzy0yt74r35r808pil9iny9znm5y3ojhsbucmhtpgr8d6tzen67w6bhuz0mzzrb3qyqticuzslh88756gpqmbwoblooetgtjibqlx53bpmnmqhorvt5c2zlym6dnogh43w0jp7g6clycmm0hkq7dr12nc09bev03u6tp46zxcmosgyf1uca0jlqfp40sacr8xrpk4pue9mt5qgqrwkcb3q12b77rwrw1z6t8r5nhebqj2vvvow7ls7asdll84l1fqltbbfqbz70ao4f7zxgdpidtv0xjq8uba7otvlgjg7plurluaspjuh0tvcspodh511448pvy7fj501gwd3y4whmfbng0anu1cmq535m63w3klafrfdk48c78nvdmtkeg8sts7csjjf6hvlhjecglanab3ouw6bppz243rlt4u4duwfgdhlqelhqlpp8dtli4ldkqjuny8o8tph8hbpvdmmp61jpj901s0r71qf3g0pqg
hmfxjjmka5rgq383foaxbqpbgihz7287t6at61ui032rta0v5on419wsopr9017f101c32pevc276tlxylqcz9hv73cb5mfoqmqri8t9e404f4p4kuart2k8y8zk5zjk15633deci0iotlp53nd6i1onswzvadjd1wfcs08xal3g05vaq7sji8cw9s2amntfb67sk3lahvufnydqqnmh5dqgz18obmeglne5wjqvlg45lmc5nbisuowddvycg3j83msl4jh0988igncb1vnostp8mb0ydx0kopxp4l2a9kauax8bxomdtlnpuomklnq785mbje0xubwryvqzes43g95idn7vqoyt1bvok4vate0h22of3y5jsn59f5z1d9z7bhzmft0iebo2tk5zb1pxkvlxxht68gxb76sun6tjomll24ggi5gdmlt7qytnz9ht8sjrut5fwiv8ogkpw2n2dy2hz5x4q1pnmyao1eb75ho75jhwkphqmcqytytgoqn3b742qcf49yh7i1m1lypsx36ftqrj654grf 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:58.426 21:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.426 [2024-07-24 21:27:43.291947] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:58.426 [2024-07-24 21:27:43.292043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:05:58.426 { 00:05:58.426 "subsystems": [ 00:05:58.426 { 00:05:58.426 "subsystem": "bdev", 00:05:58.426 "config": [ 00:05:58.426 { 00:05:58.426 "params": { 00:05:58.426 "trtype": "pcie", 00:05:58.426 "traddr": "0000:00:10.0", 00:05:58.427 "name": "Nvme0" 00:05:58.427 }, 00:05:58.427 "method": "bdev_nvme_attach_controller" 00:05:58.427 }, 00:05:58.427 { 00:05:58.427 "method": "bdev_wait_for_examine" 00:05:58.427 } 00:05:58.427 ] 00:05:58.427 } 00:05:58.427 ] 00:05:58.427 } 00:05:58.685 [2024-07-24 21:27:43.429939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.685 [2024-07-24 21:27:43.537794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.685 [2024-07-24 21:27:43.612964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.202  Copying: 4096/4096 [B] (average 4000 kBps) 00:05:59.202 00:05:59.202 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:59.202 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:59.202 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:59.202 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.202 { 00:05:59.202 "subsystems": [ 00:05:59.202 { 00:05:59.202 "subsystem": "bdev", 00:05:59.202 "config": [ 00:05:59.202 { 00:05:59.202 "params": { 00:05:59.202 "trtype": "pcie", 00:05:59.202 "traddr": "0000:00:10.0", 00:05:59.202 "name": "Nvme0" 00:05:59.202 }, 00:05:59.202 "method": "bdev_nvme_attach_controller" 00:05:59.202 }, 00:05:59.202 { 00:05:59.202 "method": "bdev_wait_for_examine" 00:05:59.202 } 00:05:59.202 ] 00:05:59.202 } 00:05:59.202 ] 00:05:59.202 } 00:05:59.202 [2024-07-24 21:27:44.066482] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
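The dd_rw_offset traces surrounding this point write a single 4096-byte block of generated text one block into the bdev with --seek=1, read the same region back with --skip=1 --count=1, and then compare the data with a shell pattern match instead of diff. A condensed sketch, again reusing SPDK_DD and bdev_conf; the input redirections for the two read commands are assumptions, since they fall outside the captured trace:

    (( count = seek = skip = 1 ))
    "$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek="$seek" --json <(printf '%s' "$bdev_conf")
    "$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip="$skip" --count="$count" --json <(printf '%s' "$bdev_conf")
    read -rn4096 data       < test/dd/dd.dump0   # the generated pattern contains no newlines
    read -rn4096 data_check < test/dd/dd.dump1
    [[ $data == "$data_check" ]] && echo "offset round trip matches"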
00:05:59.202 [2024-07-24 21:27:44.066589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61989 ] 00:05:59.461 [2024-07-24 21:27:44.207745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.461 [2024-07-24 21:27:44.360837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.461 [2024-07-24 21:27:44.443399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.978  Copying: 4096/4096 [B] (average 4000 kBps) 00:05:59.978 00:05:59.978 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 5cmrn87qbqnm7yn1nj1ncnyfl274qt00bweqektjsanwang2dtkrboc1q28i5qaqsjeglf0k8u7ymbvjwh2t6mo5x0yeh2k0oxfzhwa8dawqodt6b0hhnn62z5cxhcw1revwca87xlx53pkv2jxpu55k7mayxozqv4upi7akte2ya4vzcc97ag2bg967mtlxeufkk6zebzbmyud29uvs8b3vv0ikmzqiapnk4pdwlryhd5hh5b5cup10zx9ds4xmj7dbnrkgg8fglwa7o3opx5b6o4swzadw0wsuhb5esd0reum74xufst1nz3sfmqfd55iwxt3b9ra55f7r7mcedis2e74woczrto699poo1jxlxiygxu2k8s415m5mev0ojkvw2mqi66hgiikvxw13qd0wcndkjcp49kmpz8sz21t2baa7f9t55e942j2n5ne31xamj16bv8gf7gbb7j66tk4j5t6w5u208ur19gblogsv7kxiou84ry1vk0yq41m82imm49ob2ih47v33fra9svo13wgxqwduxp3okvg2fh3gfdln8i9kx4cd36a0e6ch4pcrrguf8arn7xu1clf1tuqfdzstg669wi76m1q55yso5u748noazbridne1k4s6pydtqt2mvek8iae4nlpjkg9ltvii1kugbvknoywgzdock2fqq5rmgdzg07bzgank9fcf68onlsyjis6nwuzxtk30m5yweaxu9dzcnn3kjxhzy3u72nvvemidl4kcj70tgvo7otaiprdbv4am9bz1eshcbuormc3huubh7thhx4ditco0zic8grr9xb2tlcy225bjdrty7whj129qpxo3wdwufeipuhfb8mjz2gmlylfelt5yyfigeq4m6007zesrjxex8ton4rv07jssoxjsnleqxcrdnufnovx02ccd1wxhnb8qv3t7sk9sazae9p2riav13tmhlcmgjhn6p2sqkytotqtan9ta00hsxnz1ci308zx9fkvd6005vy1ta4o2uu0unybt5y7kajy1lb2a0bfxk9devv6thin5w6l1741h3s2xmuztpee3fvt6f6i302xh7kbbiwudf8n54tlw8kmhw45ipds3xfbxhucywtnt5nwz1go0nqn8gyfgozv1ptqucdl44935nu2g0bi6p7nkd5nbsyfmlz5ikf8x9dwdw0bg5w1yk9xu9gpt7rtkb60z17iudatxbaetuiojkelyq14mvuxgrmp32korrdlceff6ld5u2xaoiuj4ejeetkxbpgkizw9ej0ldyurg2be4d46sre5o1lu3smprzud0r26splfxqaccmh615vegugoeiybh3wp7786a6tpylr8xwgdz2x58pb0hd4uxmyfiy2cwyady0owl18u1c9d4lrrsqlg67vkp9tnx8baaczr9er0hsfim0zu1f7xivz0f5fcxiohjzknxdfwko3p7j44hdlem4g47gdu4vfy6jzii12rz8oc1kwb1yqwlq2qn8mmkv83xvq248iqf9o5ngvuzhgtxmj6og082fjtddjdtbssinbe93d4uljgzs126igpiv1cp9o05yhvbbvfunq7odj41n7daxs3gqulhvo4jawh1mf60661mndrh2v6y8lwgo17i2voy3ytf1s8na2we3ehfkamh2sqol57k5xkaauz5k1qdggm954omds0upvj4ewm8bipa6t3huf6h8l78dqgvuvgk975ajblr8k4pwl09shjhob2k5w5glhmtj63gd90x0zof0i6xtbddumhx8pqye9y3377tw2453mbwn4j6dc50oq0r0dru1ycqv95sx3fvklh6hwk0z05xq3fwwvdj57h7v6bm0n4rqs7rmb5j9ahvrwx0u8r69pn8v6njp10pp9by2h693e752p6jabbvfhfoyplnb262ycv22da34c27hq1q9a2w8mgjybojjcepgb8v88jrj63wwf9umsvur6nzhedpadpe6w8lx941g1yqzr4f6olm50gac13kbwvemeuay4ef6xfe9lm7w52fdfbqk7e938ulj4lmbmafukjixegdatevxgb5i998sa48umtc09qjnn2h6q7fsvasiyy4x26kshxn28zvu9kjqhwwtzettf985vprbo8hbeltab42d8bwhfjwda2ctc1b9izab3fg0l4o5suhi2twj2ydw469wgb8a35ek56g3l0obxuofus5mjy94xw7riupb6b2wv3yutsdszdiqn4nnid9nnrxgzl3byi87u9ldj53zapz88u72o3fc4unbn35jpr1vd1oaj2n5rwm6s7cwe2gxejcd0wau5pddsmd4l1w5b0l2xxxyu2qnh5n1x37qkpdsqh6xib1q5xfrxerk8420wt684dtr2ym087z7h1tqwixj4k8rsmor2bhuewfqhg0o5ws2gozd5nsiuoun6gcdr1s69vf2xt5hzefxlkyha72hjjrkt6d8jyxd2oe10wlvcuzj3jppn93rqsx7xpa4dww9lf6grr01vxrssvpq0xmcixb0fv3vx9nwx6d1su3pdimxekea02ixpcz6q2njflo3q950op3syl5xl55fv5gnzveh1ge3mrqgxessb26
72bhgp8v8kx05xm4dmt9lgglapnrsgjpwysubacuxer0ftuj6mzg720uu2cw263j9qwg7qko7vomm232bssu5al2qu6g18aeec5uor9jjjbrdqpprlotmcba39eywdt2v3u0r1g2ho0dlv5e40at864cvyqp9md82nmz2am999djto24ibmpgcacqcq9kwwfesjy9hzt81g8x3uhmv6p0zhgmihlynjqahnmrfhv9gmx7ewgmfe7lymo3ijmbw069qfjj02n6t5mfz1yiz4cbaaf3d6wh0jxt03edvvm4mj10c6istecjs96f4q0qd5m3eg95003n349butt9utgoqk69g5qhvv7bv51j6t44h75efdo8soo2c488l0p3jbg0yg9mxnpc4wd3hmcyfuwj78q01qt0cqdd5iirpkzy0yt74r35r808pil9iny9znm5y3ojhsbucmhtpgr8d6tzen67w6bhuz0mzzrb3qyqticuzslh88756gpqmbwoblooetgtjibqlx53bpmnmqhorvt5c2zlym6dnogh43w0jp7g6clycmm0hkq7dr12nc09bev03u6tp46zxcmosgyf1uca0jlqfp40sacr8xrpk4pue9mt5qgqrwkcb3q12b77rwrw1z6t8r5nhebqj2vvvow7ls7asdll84l1fqltbbfqbz70ao4f7zxgdpidtv0xjq8uba7otvlgjg7plurluaspjuh0tvcspodh511448pvy7fj501gwd3y4whmfbng0anu1cmq535m63w3klafrfdk48c78nvdmtkeg8sts7csjjf6hvlhjecglanab3ouw6bppz243rlt4u4duwfgdhlqelhqlpp8dtli4ldkqjuny8o8tph8hbpvdmmp61jpj901s0r71qf3g0pqghmfxjjmka5rgq383foaxbqpbgihz7287t6at61ui032rta0v5on419wsopr9017f101c32pevc276tlxylqcz9hv73cb5mfoqmqri8t9e404f4p4kuart2k8y8zk5zjk15633deci0iotlp53nd6i1onswzvadjd1wfcs08xal3g05vaq7sji8cw9s2amntfb67sk3lahvufnydqqnmh5dqgz18obmeglne5wjqvlg45lmc5nbisuowddvycg3j83msl4jh0988igncb1vnostp8mb0ydx0kopxp4l2a9kauax8bxomdtlnpuomklnq785mbje0xubwryvqzes43g95idn7vqoyt1bvok4vate0h22of3y5jsn59f5z1d9z7bhzmft0iebo2tk5zb1pxkvlxxht68gxb76sun6tjomll24ggi5gdmlt7qytnz9ht8sjrut5fwiv8ogkpw2n2dy2hz5x4q1pnmyao1eb75ho75jhwkphqmcqytytgoqn3b742qcf49yh7i1m1lypsx36ftqrj654grf == \5\c\m\r\n\8\7\q\b\q\n\m\7\y\n\1\n\j\1\n\c\n\y\f\l\2\7\4\q\t\0\0\b\w\e\q\e\k\t\j\s\a\n\w\a\n\g\2\d\t\k\r\b\o\c\1\q\2\8\i\5\q\a\q\s\j\e\g\l\f\0\k\8\u\7\y\m\b\v\j\w\h\2\t\6\m\o\5\x\0\y\e\h\2\k\0\o\x\f\z\h\w\a\8\d\a\w\q\o\d\t\6\b\0\h\h\n\n\6\2\z\5\c\x\h\c\w\1\r\e\v\w\c\a\8\7\x\l\x\5\3\p\k\v\2\j\x\p\u\5\5\k\7\m\a\y\x\o\z\q\v\4\u\p\i\7\a\k\t\e\2\y\a\4\v\z\c\c\9\7\a\g\2\b\g\9\6\7\m\t\l\x\e\u\f\k\k\6\z\e\b\z\b\m\y\u\d\2\9\u\v\s\8\b\3\v\v\0\i\k\m\z\q\i\a\p\n\k\4\p\d\w\l\r\y\h\d\5\h\h\5\b\5\c\u\p\1\0\z\x\9\d\s\4\x\m\j\7\d\b\n\r\k\g\g\8\f\g\l\w\a\7\o\3\o\p\x\5\b\6\o\4\s\w\z\a\d\w\0\w\s\u\h\b\5\e\s\d\0\r\e\u\m\7\4\x\u\f\s\t\1\n\z\3\s\f\m\q\f\d\5\5\i\w\x\t\3\b\9\r\a\5\5\f\7\r\7\m\c\e\d\i\s\2\e\7\4\w\o\c\z\r\t\o\6\9\9\p\o\o\1\j\x\l\x\i\y\g\x\u\2\k\8\s\4\1\5\m\5\m\e\v\0\o\j\k\v\w\2\m\q\i\6\6\h\g\i\i\k\v\x\w\1\3\q\d\0\w\c\n\d\k\j\c\p\4\9\k\m\p\z\8\s\z\2\1\t\2\b\a\a\7\f\9\t\5\5\e\9\4\2\j\2\n\5\n\e\3\1\x\a\m\j\1\6\b\v\8\g\f\7\g\b\b\7\j\6\6\t\k\4\j\5\t\6\w\5\u\2\0\8\u\r\1\9\g\b\l\o\g\s\v\7\k\x\i\o\u\8\4\r\y\1\v\k\0\y\q\4\1\m\8\2\i\m\m\4\9\o\b\2\i\h\4\7\v\3\3\f\r\a\9\s\v\o\1\3\w\g\x\q\w\d\u\x\p\3\o\k\v\g\2\f\h\3\g\f\d\l\n\8\i\9\k\x\4\c\d\3\6\a\0\e\6\c\h\4\p\c\r\r\g\u\f\8\a\r\n\7\x\u\1\c\l\f\1\t\u\q\f\d\z\s\t\g\6\6\9\w\i\7\6\m\1\q\5\5\y\s\o\5\u\7\4\8\n\o\a\z\b\r\i\d\n\e\1\k\4\s\6\p\y\d\t\q\t\2\m\v\e\k\8\i\a\e\4\n\l\p\j\k\g\9\l\t\v\i\i\1\k\u\g\b\v\k\n\o\y\w\g\z\d\o\c\k\2\f\q\q\5\r\m\g\d\z\g\0\7\b\z\g\a\n\k\9\f\c\f\6\8\o\n\l\s\y\j\i\s\6\n\w\u\z\x\t\k\3\0\m\5\y\w\e\a\x\u\9\d\z\c\n\n\3\k\j\x\h\z\y\3\u\7\2\n\v\v\e\m\i\d\l\4\k\c\j\7\0\t\g\v\o\7\o\t\a\i\p\r\d\b\v\4\a\m\9\b\z\1\e\s\h\c\b\u\o\r\m\c\3\h\u\u\b\h\7\t\h\h\x\4\d\i\t\c\o\0\z\i\c\8\g\r\r\9\x\b\2\t\l\c\y\2\2\5\b\j\d\r\t\y\7\w\h\j\1\2\9\q\p\x\o\3\w\d\w\u\f\e\i\p\u\h\f\b\8\m\j\z\2\g\m\l\y\l\f\e\l\t\5\y\y\f\i\g\e\q\4\m\6\0\0\7\z\e\s\r\j\x\e\x\8\t\o\n\4\r\v\0\7\j\s\s\o\x\j\s\n\l\e\q\x\c\r\d\n\u\f\n\o\v\x\0\2\c\c\d\1\w\x\h\n\b\8\q\v\3\t\7\s\k\9\s\a\z\a\e\9\p\2\r\i\a\v\1\3\t\m\h\l\c\m\g\j\h\n\6\p\2\s\q\k\y\t\o\t\q\t\a\n\9\t\a\0\0\h\s\x\n\z\1\c\i\3\0\8\z\x\9\f\k\v\d\6\0\0\5\v\y\1\t\a\4\o\2\u\u\0\u\n\y\b\t\5\y\7\k\a\j\y\1\l\b\2\a\0\
b\f\x\k\9\d\e\v\v\6\t\h\i\n\5\w\6\l\1\7\4\1\h\3\s\2\x\m\u\z\t\p\e\e\3\f\v\t\6\f\6\i\3\0\2\x\h\7\k\b\b\i\w\u\d\f\8\n\5\4\t\l\w\8\k\m\h\w\4\5\i\p\d\s\3\x\f\b\x\h\u\c\y\w\t\n\t\5\n\w\z\1\g\o\0\n\q\n\8\g\y\f\g\o\z\v\1\p\t\q\u\c\d\l\4\4\9\3\5\n\u\2\g\0\b\i\6\p\7\n\k\d\5\n\b\s\y\f\m\l\z\5\i\k\f\8\x\9\d\w\d\w\0\b\g\5\w\1\y\k\9\x\u\9\g\p\t\7\r\t\k\b\6\0\z\1\7\i\u\d\a\t\x\b\a\e\t\u\i\o\j\k\e\l\y\q\1\4\m\v\u\x\g\r\m\p\3\2\k\o\r\r\d\l\c\e\f\f\6\l\d\5\u\2\x\a\o\i\u\j\4\e\j\e\e\t\k\x\b\p\g\k\i\z\w\9\e\j\0\l\d\y\u\r\g\2\b\e\4\d\4\6\s\r\e\5\o\1\l\u\3\s\m\p\r\z\u\d\0\r\2\6\s\p\l\f\x\q\a\c\c\m\h\6\1\5\v\e\g\u\g\o\e\i\y\b\h\3\w\p\7\7\8\6\a\6\t\p\y\l\r\8\x\w\g\d\z\2\x\5\8\p\b\0\h\d\4\u\x\m\y\f\i\y\2\c\w\y\a\d\y\0\o\w\l\1\8\u\1\c\9\d\4\l\r\r\s\q\l\g\6\7\v\k\p\9\t\n\x\8\b\a\a\c\z\r\9\e\r\0\h\s\f\i\m\0\z\u\1\f\7\x\i\v\z\0\f\5\f\c\x\i\o\h\j\z\k\n\x\d\f\w\k\o\3\p\7\j\4\4\h\d\l\e\m\4\g\4\7\g\d\u\4\v\f\y\6\j\z\i\i\1\2\r\z\8\o\c\1\k\w\b\1\y\q\w\l\q\2\q\n\8\m\m\k\v\8\3\x\v\q\2\4\8\i\q\f\9\o\5\n\g\v\u\z\h\g\t\x\m\j\6\o\g\0\8\2\f\j\t\d\d\j\d\t\b\s\s\i\n\b\e\9\3\d\4\u\l\j\g\z\s\1\2\6\i\g\p\i\v\1\c\p\9\o\0\5\y\h\v\b\b\v\f\u\n\q\7\o\d\j\4\1\n\7\d\a\x\s\3\g\q\u\l\h\v\o\4\j\a\w\h\1\m\f\6\0\6\6\1\m\n\d\r\h\2\v\6\y\8\l\w\g\o\1\7\i\2\v\o\y\3\y\t\f\1\s\8\n\a\2\w\e\3\e\h\f\k\a\m\h\2\s\q\o\l\5\7\k\5\x\k\a\a\u\z\5\k\1\q\d\g\g\m\9\5\4\o\m\d\s\0\u\p\v\j\4\e\w\m\8\b\i\p\a\6\t\3\h\u\f\6\h\8\l\7\8\d\q\g\v\u\v\g\k\9\7\5\a\j\b\l\r\8\k\4\p\w\l\0\9\s\h\j\h\o\b\2\k\5\w\5\g\l\h\m\t\j\6\3\g\d\9\0\x\0\z\o\f\0\i\6\x\t\b\d\d\u\m\h\x\8\p\q\y\e\9\y\3\3\7\7\t\w\2\4\5\3\m\b\w\n\4\j\6\d\c\5\0\o\q\0\r\0\d\r\u\1\y\c\q\v\9\5\s\x\3\f\v\k\l\h\6\h\w\k\0\z\0\5\x\q\3\f\w\w\v\d\j\5\7\h\7\v\6\b\m\0\n\4\r\q\s\7\r\m\b\5\j\9\a\h\v\r\w\x\0\u\8\r\6\9\p\n\8\v\6\n\j\p\1\0\p\p\9\b\y\2\h\6\9\3\e\7\5\2\p\6\j\a\b\b\v\f\h\f\o\y\p\l\n\b\2\6\2\y\c\v\2\2\d\a\3\4\c\2\7\h\q\1\q\9\a\2\w\8\m\g\j\y\b\o\j\j\c\e\p\g\b\8\v\8\8\j\r\j\6\3\w\w\f\9\u\m\s\v\u\r\6\n\z\h\e\d\p\a\d\p\e\6\w\8\l\x\9\4\1\g\1\y\q\z\r\4\f\6\o\l\m\5\0\g\a\c\1\3\k\b\w\v\e\m\e\u\a\y\4\e\f\6\x\f\e\9\l\m\7\w\5\2\f\d\f\b\q\k\7\e\9\3\8\u\l\j\4\l\m\b\m\a\f\u\k\j\i\x\e\g\d\a\t\e\v\x\g\b\5\i\9\9\8\s\a\4\8\u\m\t\c\0\9\q\j\n\n\2\h\6\q\7\f\s\v\a\s\i\y\y\4\x\2\6\k\s\h\x\n\2\8\z\v\u\9\k\j\q\h\w\w\t\z\e\t\t\f\9\8\5\v\p\r\b\o\8\h\b\e\l\t\a\b\4\2\d\8\b\w\h\f\j\w\d\a\2\c\t\c\1\b\9\i\z\a\b\3\f\g\0\l\4\o\5\s\u\h\i\2\t\w\j\2\y\d\w\4\6\9\w\g\b\8\a\3\5\e\k\5\6\g\3\l\0\o\b\x\u\o\f\u\s\5\m\j\y\9\4\x\w\7\r\i\u\p\b\6\b\2\w\v\3\y\u\t\s\d\s\z\d\i\q\n\4\n\n\i\d\9\n\n\r\x\g\z\l\3\b\y\i\8\7\u\9\l\d\j\5\3\z\a\p\z\8\8\u\7\2\o\3\f\c\4\u\n\b\n\3\5\j\p\r\1\v\d\1\o\a\j\2\n\5\r\w\m\6\s\7\c\w\e\2\g\x\e\j\c\d\0\w\a\u\5\p\d\d\s\m\d\4\l\1\w\5\b\0\l\2\x\x\x\y\u\2\q\n\h\5\n\1\x\3\7\q\k\p\d\s\q\h\6\x\i\b\1\q\5\x\f\r\x\e\r\k\8\4\2\0\w\t\6\8\4\d\t\r\2\y\m\0\8\7\z\7\h\1\t\q\w\i\x\j\4\k\8\r\s\m\o\r\2\b\h\u\e\w\f\q\h\g\0\o\5\w\s\2\g\o\z\d\5\n\s\i\u\o\u\n\6\g\c\d\r\1\s\6\9\v\f\2\x\t\5\h\z\e\f\x\l\k\y\h\a\7\2\h\j\j\r\k\t\6\d\8\j\y\x\d\2\o\e\1\0\w\l\v\c\u\z\j\3\j\p\p\n\9\3\r\q\s\x\7\x\p\a\4\d\w\w\9\l\f\6\g\r\r\0\1\v\x\r\s\s\v\p\q\0\x\m\c\i\x\b\0\f\v\3\v\x\9\n\w\x\6\d\1\s\u\3\p\d\i\m\x\e\k\e\a\0\2\i\x\p\c\z\6\q\2\n\j\f\l\o\3\q\9\5\0\o\p\3\s\y\l\5\x\l\5\5\f\v\5\g\n\z\v\e\h\1\g\e\3\m\r\q\g\x\e\s\s\b\2\6\7\2\b\h\g\p\8\v\8\k\x\0\5\x\m\4\d\m\t\9\l\g\g\l\a\p\n\r\s\g\j\p\w\y\s\u\b\a\c\u\x\e\r\0\f\t\u\j\6\m\z\g\7\2\0\u\u\2\c\w\2\6\3\j\9\q\w\g\7\q\k\o\7\v\o\m\m\2\3\2\b\s\s\u\5\a\l\2\q\u\6\g\1\8\a\e\e\c\5\u\o\r\9\j\j\j\b\r\d\q\p\p\r\l\o\t\m\c\b\a\3\9\e\y\w\d\t\2\v\3\u\0\r\1\g\2\h\o\0\d\l\v\5\e\4\0\a\t\8\6\4\c\v\y\q\p\9\m\d\8\2\n\m\z\2\a\m\9\9\9\d\j\t\o\2\4\i\b\m\p\g\c\a\c\q\c\q
\9\k\w\w\f\e\s\j\y\9\h\z\t\8\1\g\8\x\3\u\h\m\v\6\p\0\z\h\g\m\i\h\l\y\n\j\q\a\h\n\m\r\f\h\v\9\g\m\x\7\e\w\g\m\f\e\7\l\y\m\o\3\i\j\m\b\w\0\6\9\q\f\j\j\0\2\n\6\t\5\m\f\z\1\y\i\z\4\c\b\a\a\f\3\d\6\w\h\0\j\x\t\0\3\e\d\v\v\m\4\m\j\1\0\c\6\i\s\t\e\c\j\s\9\6\f\4\q\0\q\d\5\m\3\e\g\9\5\0\0\3\n\3\4\9\b\u\t\t\9\u\t\g\o\q\k\6\9\g\5\q\h\v\v\7\b\v\5\1\j\6\t\4\4\h\7\5\e\f\d\o\8\s\o\o\2\c\4\8\8\l\0\p\3\j\b\g\0\y\g\9\m\x\n\p\c\4\w\d\3\h\m\c\y\f\u\w\j\7\8\q\0\1\q\t\0\c\q\d\d\5\i\i\r\p\k\z\y\0\y\t\7\4\r\3\5\r\8\0\8\p\i\l\9\i\n\y\9\z\n\m\5\y\3\o\j\h\s\b\u\c\m\h\t\p\g\r\8\d\6\t\z\e\n\6\7\w\6\b\h\u\z\0\m\z\z\r\b\3\q\y\q\t\i\c\u\z\s\l\h\8\8\7\5\6\g\p\q\m\b\w\o\b\l\o\o\e\t\g\t\j\i\b\q\l\x\5\3\b\p\m\n\m\q\h\o\r\v\t\5\c\2\z\l\y\m\6\d\n\o\g\h\4\3\w\0\j\p\7\g\6\c\l\y\c\m\m\0\h\k\q\7\d\r\1\2\n\c\0\9\b\e\v\0\3\u\6\t\p\4\6\z\x\c\m\o\s\g\y\f\1\u\c\a\0\j\l\q\f\p\4\0\s\a\c\r\8\x\r\p\k\4\p\u\e\9\m\t\5\q\g\q\r\w\k\c\b\3\q\1\2\b\7\7\r\w\r\w\1\z\6\t\8\r\5\n\h\e\b\q\j\2\v\v\v\o\w\7\l\s\7\a\s\d\l\l\8\4\l\1\f\q\l\t\b\b\f\q\b\z\7\0\a\o\4\f\7\z\x\g\d\p\i\d\t\v\0\x\j\q\8\u\b\a\7\o\t\v\l\g\j\g\7\p\l\u\r\l\u\a\s\p\j\u\h\0\t\v\c\s\p\o\d\h\5\1\1\4\4\8\p\v\y\7\f\j\5\0\1\g\w\d\3\y\4\w\h\m\f\b\n\g\0\a\n\u\1\c\m\q\5\3\5\m\6\3\w\3\k\l\a\f\r\f\d\k\4\8\c\7\8\n\v\d\m\t\k\e\g\8\s\t\s\7\c\s\j\j\f\6\h\v\l\h\j\e\c\g\l\a\n\a\b\3\o\u\w\6\b\p\p\z\2\4\3\r\l\t\4\u\4\d\u\w\f\g\d\h\l\q\e\l\h\q\l\p\p\8\d\t\l\i\4\l\d\k\q\j\u\n\y\8\o\8\t\p\h\8\h\b\p\v\d\m\m\p\6\1\j\p\j\9\0\1\s\0\r\7\1\q\f\3\g\0\p\q\g\h\m\f\x\j\j\m\k\a\5\r\g\q\3\8\3\f\o\a\x\b\q\p\b\g\i\h\z\7\2\8\7\t\6\a\t\6\1\u\i\0\3\2\r\t\a\0\v\5\o\n\4\1\9\w\s\o\p\r\9\0\1\7\f\1\0\1\c\3\2\p\e\v\c\2\7\6\t\l\x\y\l\q\c\z\9\h\v\7\3\c\b\5\m\f\o\q\m\q\r\i\8\t\9\e\4\0\4\f\4\p\4\k\u\a\r\t\2\k\8\y\8\z\k\5\z\j\k\1\5\6\3\3\d\e\c\i\0\i\o\t\l\p\5\3\n\d\6\i\1\o\n\s\w\z\v\a\d\j\d\1\w\f\c\s\0\8\x\a\l\3\g\0\5\v\a\q\7\s\j\i\8\c\w\9\s\2\a\m\n\t\f\b\6\7\s\k\3\l\a\h\v\u\f\n\y\d\q\q\n\m\h\5\d\q\g\z\1\8\o\b\m\e\g\l\n\e\5\w\j\q\v\l\g\4\5\l\m\c\5\n\b\i\s\u\o\w\d\d\v\y\c\g\3\j\8\3\m\s\l\4\j\h\0\9\8\8\i\g\n\c\b\1\v\n\o\s\t\p\8\m\b\0\y\d\x\0\k\o\p\x\p\4\l\2\a\9\k\a\u\a\x\8\b\x\o\m\d\t\l\n\p\u\o\m\k\l\n\q\7\8\5\m\b\j\e\0\x\u\b\w\r\y\v\q\z\e\s\4\3\g\9\5\i\d\n\7\v\q\o\y\t\1\b\v\o\k\4\v\a\t\e\0\h\2\2\o\f\3\y\5\j\s\n\5\9\f\5\z\1\d\9\z\7\b\h\z\m\f\t\0\i\e\b\o\2\t\k\5\z\b\1\p\x\k\v\l\x\x\h\t\6\8\g\x\b\7\6\s\u\n\6\t\j\o\m\l\l\2\4\g\g\i\5\g\d\m\l\t\7\q\y\t\n\z\9\h\t\8\s\j\r\u\t\5\f\w\i\v\8\o\g\k\p\w\2\n\2\d\y\2\h\z\5\x\4\q\1\p\n\m\y\a\o\1\e\b\7\5\h\o\7\5\j\h\w\k\p\h\q\m\c\q\y\t\y\t\g\o\q\n\3\b\7\4\2\q\c\f\4\9\y\h\7\i\1\m\1\l\y\p\s\x\3\6\f\t\q\r\j\6\5\4\g\r\f ]] 00:05:59.979 ************************************ 00:05:59.979 END TEST dd_rw_offset 00:05:59.979 00:05:59.979 real 0m1.650s 00:05:59.979 user 0m1.132s 00:05:59.979 sys 0m0.773s 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.979 ************************************ 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:59.979 21:27:44 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.979 21:27:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.979 [2024-07-24 21:27:44.937126] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:05:59.979 [2024-07-24 21:27:44.937217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:05:59.979 { 00:05:59.979 "subsystems": [ 00:05:59.979 { 00:05:59.979 "subsystem": "bdev", 00:05:59.979 "config": [ 00:05:59.979 { 00:05:59.979 "params": { 00:05:59.979 "trtype": "pcie", 00:05:59.979 "traddr": "0000:00:10.0", 00:05:59.979 "name": "Nvme0" 00:05:59.979 }, 00:05:59.979 "method": "bdev_nvme_attach_controller" 00:05:59.979 }, 00:05:59.979 { 00:05:59.979 "method": "bdev_wait_for_examine" 00:05:59.979 } 00:05:59.979 ] 00:05:59.979 } 00:05:59.979 ] 00:05:59.979 } 00:06:00.238 [2024-07-24 21:27:45.074997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.238 [2024-07-24 21:27:45.164279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.238 [2024-07-24 21:27:45.236751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.756  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:00.756 00:06:00.756 21:27:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.756 ************************************ 00:06:00.756 END TEST spdk_dd_basic_rw 00:06:00.756 ************************************ 00:06:00.756 00:06:00.756 real 0m22.263s 00:06:00.756 user 0m15.987s 00:06:00.756 sys 0m8.813s 00:06:00.756 21:27:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.756 21:27:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.756 21:27:45 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.756 21:27:45 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.756 21:27:45 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.756 21:27:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:00.756 ************************************ 00:06:00.756 START TEST spdk_dd_posix 00:06:00.756 ************************************ 00:06:00.756 21:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:01.016 * Looking for test storage... 
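For reference, the clear_nvme cleanup a few lines up drives spdk_dd straight at the Nvme0n1 bdev, feeding the bdev configuration as JSON on a substituted file descriptor (/dev/fd/62). A stand-alone sketch of the same call, with the JSON written to an ordinary file instead of a process substitution (SPDK_DD and bdev.json are placeholder names; the traddr, block size and count are the values reported in the log):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Zero the first 1 MiB of the attached namespace, as the cleanup step does.
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json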
00:06:01.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:01.016 * First test run, liburing in use 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.016 ************************************ 00:06:01.016 START TEST dd_flag_append 00:06:01.016 ************************************ 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=v7mtj78bf5n9ktb2otby5adr7iummm5k 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=x6r30aq134my2w7ywk0rzfjindn2kdv3 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s v7mtj78bf5n9ktb2otby5adr7iummm5k 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s x6r30aq134my2w7ywk0rzfjindn2kdv3 00:06:01.016 21:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:01.016 [2024-07-24 21:27:45.885609] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
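The append case just launched comes down to a simple contract: two generated strings are written to dd.dump0 and dd.dump1, dd.dump0 is copied onto dd.dump1 with --oflag=append, and the destination must end up as the second string followed by the first (that is what the long [[ ... == ... ]] check further down verifies). A minimal reproduction outside the harness, with literal placeholder strings in place of the run's generated bytes and SPDK_DD as a placeholder path:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf %s 'first-string'  > dd.dump0
printf %s 'second-string' > dd.dump1
# With O_APPEND on the destination, the existing bytes stay and the input lands after them.
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ "$(cat dd.dump1)" == 'second-stringfirst-string' ]] && echo 'append kept the original contents'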
00:06:01.016 [2024-07-24 21:27:45.885729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62088 ] 00:06:01.276 [2024-07-24 21:27:46.025331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.276 [2024-07-24 21:27:46.167919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.276 [2024-07-24 21:27:46.239363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.794  Copying: 32/32 [B] (average 31 kBps) 00:06:01.794 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ x6r30aq134my2w7ywk0rzfjindn2kdv3v7mtj78bf5n9ktb2otby5adr7iummm5k == \x\6\r\3\0\a\q\1\3\4\m\y\2\w\7\y\w\k\0\r\z\f\j\i\n\d\n\2\k\d\v\3\v\7\m\t\j\7\8\b\f\5\n\9\k\t\b\2\o\t\b\y\5\a\d\r\7\i\u\m\m\m\5\k ]] 00:06:01.794 00:06:01.794 real 0m0.781s 00:06:01.794 user 0m0.478s 00:06:01.794 sys 0m0.368s 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.794 ************************************ 00:06:01.794 END TEST dd_flag_append 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:01.794 ************************************ 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.794 ************************************ 00:06:01.794 START TEST dd_flag_directory 00:06:01.794 ************************************ 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.794 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:06:01.795 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.795 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.795 21:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.795 [2024-07-24 21:27:46.713192] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:01.795 [2024-07-24 21:27:46.713265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62121 ] 00:06:02.054 [2024-07-24 21:27:46.844281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.054 [2024-07-24 21:27:46.946595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.054 [2024-07-24 21:27:47.022115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.312 [2024-07-24 21:27:47.065175] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.312 [2024-07-24 21:27:47.065240] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.312 [2024-07-24 21:27:47.065284] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.312 [2024-07-24 21:27:47.223957] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.571 21:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.571 [2024-07-24 21:27:47.429071] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:02.571 [2024-07-24 21:27:47.429149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:06:02.571 [2024-07-24 21:27:47.561829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.830 [2024-07-24 21:27:47.668006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.830 [2024-07-24 21:27:47.745113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.830 [2024-07-24 21:27:47.788414] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.830 [2024-07-24 21:27:47.788483] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.830 [2024-07-24 21:27:47.788511] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.089 [2024-07-24 21:27:47.952827] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.348 00:06:03.348 real 0m1.448s 00:06:03.348 user 0m0.856s 00:06:03.348 sys 0m0.378s 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:03.348 ************************************ 00:06:03.348 END TEST dd_flag_directory 00:06:03.348 ************************************ 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:03.348 ************************************ 00:06:03.348 START TEST dd_flag_nofollow 00:06:03.348 ************************************ 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:03.348 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:03.349 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.349 [2024-07-24 21:27:48.231903] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
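The nofollow case prepares dd.dump0.link and dd.dump1.link with ln -fs and then expects the read through the link to be refused: with --iflag=nofollow the open fails with "Too many levels of symbolic links" (ELOOP), and the NOT wrapper counts the non-zero exit as a pass. Reduced to its essentials (SPDK_DD is a placeholder; the error text is the one the run reports below):

ln -fs dd.dump0 dd.dump0.link
# Reading the link with nofollow must fail; the later copies without the flag succeed.
if ! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
    echo 'nofollow refused the symlink as expected'
fi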
00:06:03.349 [2024-07-24 21:27:48.232022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62160 ] 00:06:03.607 [2024-07-24 21:27:48.371152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.607 [2024-07-24 21:27:48.496070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.608 [2024-07-24 21:27:48.570573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.886 [2024-07-24 21:27:48.616254] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.886 [2024-07-24 21:27:48.616326] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.886 [2024-07-24 21:27:48.616342] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.886 [2024-07-24 21:27:48.778444] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.150 21:27:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:04.150 21:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:04.150 [2024-07-24 21:27:48.941346] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:04.150 [2024-07-24 21:27:48.941435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62175 ] 00:06:04.150 [2024-07-24 21:27:49.070006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.409 [2024-07-24 21:27:49.182750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.409 [2024-07-24 21:27:49.260657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.409 [2024-07-24 21:27:49.306438] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.409 [2024-07-24 21:27:49.306499] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.409 [2024-07-24 21:27:49.306515] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.668 [2024-07-24 21:27:49.471244] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:04.668 21:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.668 [2024-07-24 21:27:49.655999] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:04.668 [2024-07-24 21:27:49.656117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62183 ] 00:06:04.926 [2024-07-24 21:27:49.794026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.926 [2024-07-24 21:27:49.898940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.184 [2024-07-24 21:27:49.974904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.443  Copying: 512/512 [B] (average 500 kBps) 00:06:05.443 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ oj1kc8yv3bhdinfqs5sv16hlqk59czy3htx0gv75dr2fpvvvg8lvz3o8r6gbqiynbsdzj2gvequt39mpoqs2h9ndiqfh25w4dfxs4p8kefnnxzp35hzo5md610efim6e9xi6s2rb7eorq3kj9kq1l1cyuqnbd1kj5tpi12jsxwmi2rsfgwi8s3elld8i22cs8o2wxjhzow66a0janhrakta2n0gbvvo8gchh4spzsz1wmg1r8b0fm4cjzzzkzpsul1vih5zieluybmbrke5j7cdmg0l84h3pw6ayu3hlw1xh6dvnmgwvcn4c8ndy7vnqzvh6cnyycffn6lqeabhx0i6ci6p00hfsy8ep5w14mi2rt31bivgh3ig7t9rp41uomuoyskadra1b4ihr8ynn5knjstjo2kv1urkfoxovuo9jh4rpkrm6nky817bcz4spvriet3hu1gst054r9fd0u4o7or3ncy85asn37d0udhzxdgkfhayqlx7jxvrsf1ix == \o\j\1\k\c\8\y\v\3\b\h\d\i\n\f\q\s\5\s\v\1\6\h\l\q\k\5\9\c\z\y\3\h\t\x\0\g\v\7\5\d\r\2\f\p\v\v\v\g\8\l\v\z\3\o\8\r\6\g\b\q\i\y\n\b\s\d\z\j\2\g\v\e\q\u\t\3\9\m\p\o\q\s\2\h\9\n\d\i\q\f\h\2\5\w\4\d\f\x\s\4\p\8\k\e\f\n\n\x\z\p\3\5\h\z\o\5\m\d\6\1\0\e\f\i\m\6\e\9\x\i\6\s\2\r\b\7\e\o\r\q\3\k\j\9\k\q\1\l\1\c\y\u\q\n\b\d\1\k\j\5\t\p\i\1\2\j\s\x\w\m\i\2\r\s\f\g\w\i\8\s\3\e\l\l\d\8\i\2\2\c\s\8\o\2\w\x\j\h\z\o\w\6\6\a\0\j\a\n\h\r\a\k\t\a\2\n\0\g\b\v\v\o\8\g\c\h\h\4\s\p\z\s\z\1\w\m\g\1\r\8\b\0\f\m\4\c\j\z\z\z\k\z\p\s\u\l\1\v\i\h\5\z\i\e\l\u\y\b\m\b\r\k\e\5\j\7\c\d\m\g\0\l\8\4\h\3\p\w\6\a\y\u\3\h\l\w\1\x\h\6\d\v\n\m\g\w\v\c\n\4\c\8\n\d\y\7\v\n\q\z\v\h\6\c\n\y\y\c\f\f\n\6\l\q\e\a\b\h\x\0\i\6\c\i\6\p\0\0\h\f\s\y\8\e\p\5\w\1\4\m\i\2\r\t\3\1\b\i\v\g\h\3\i\g\7\t\9\r\p\4\1\u\o\m\u\o\y\s\k\a\d\r\a\1\b\4\i\h\r\8\y\n\n\5\k\n\j\s\t\j\o\2\k\v\1\u\r\k\f\o\x\o\v\u\o\9\j\h\4\r\p\k\r\m\6\n\k\y\8\1\7\b\c\z\4\s\p\v\r\i\e\t\3\h\u\1\g\s\t\0\5\4\r\9\f\d\0\u\4\o\7\o\r\3\n\c\y\8\5\a\s\n\3\7\d\0\u\d\h\z\x\d\g\k\f\h\a\y\q\l\x\7\j\x\v\r\s\f\1\i\x ]] 00:06:05.443 00:06:05.443 real 0m2.133s 00:06:05.443 user 0m1.224s 00:06:05.443 sys 0m0.765s 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:05.443 ************************************ 00:06:05.443 END TEST dd_flag_nofollow 00:06:05.443 ************************************ 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:05.443 ************************************ 00:06:05.443 START TEST dd_flag_noatime 00:06:05.443 ************************************ 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:05.443 21:27:50 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721856470 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721856470 00:06:05.443 21:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:06.820 21:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.820 [2024-07-24 21:27:51.443462] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:06.820 [2024-07-24 21:27:51.443560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:06:06.820 [2024-07-24 21:27:51.585276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.820 [2024-07-24 21:27:51.706738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.820 [2024-07-24 21:27:51.782617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.337  Copying: 512/512 [B] (average 500 kBps) 00:06:07.337 00:06:07.337 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.337 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721856470 )) 00:06:07.337 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.337 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721856470 )) 00:06:07.337 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.337 [2024-07-24 21:27:52.165281] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
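The noatime check above works by sampling the access time with stat --printf=%X, sleeping for a second, copying the file with --iflag=noatime, and requiring the timestamp to be unchanged; the second copy just started repeats the read without the flag, after which the test expects the atime to have moved forward on this filesystem. Condensed into a sketch (file names and SPDK_DD are placeholders):

atime_before=$(stat --printf=%X dd.dump0)
sleep 1
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before )) && echo 'noatime read left atime alone'
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1      # same read, flag dropped
(( $(stat --printf=%X dd.dump0) > atime_before )) && echo 'plain read advanced it'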
00:06:07.337 [2024-07-24 21:27:52.165399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62244 ] 00:06:07.337 [2024-07-24 21:27:52.295925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.595 [2024-07-24 21:27:52.396169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.595 [2024-07-24 21:27:52.470743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.853  Copying: 512/512 [B] (average 500 kBps) 00:06:07.853 00:06:07.853 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.853 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721856472 )) 00:06:07.853 00:06:07.853 real 0m2.456s 00:06:07.853 user 0m0.820s 00:06:07.853 sys 0m0.779s 00:06:07.853 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.853 21:27:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:07.853 ************************************ 00:06:07.853 END TEST dd_flag_noatime 00:06:07.853 ************************************ 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.111 ************************************ 00:06:08.111 START TEST dd_flags_misc 00:06:08.111 ************************************ 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.111 21:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:08.111 [2024-07-24 21:27:52.932700] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
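The misc pass starting here walks a small flag matrix: each read flag in (direct, nonblock) is paired with every write flag in (direct, nonblock, sync, dsync), a fresh 512-byte buffer is generated per read flag, and each copy must reproduce the source byte for byte (the [[ ... == ... ]] pattern checks that follow). Spelled out as a sketch, with head -c standing in for the harness's gen_bytes helper, cmp standing in for its pattern comparison, and SPDK_DD as a placeholder:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > dd.dump0        # fresh input per read flag
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        cmp dd.dump0 dd.dump1                  # every combination must yield an identical copy
    done
done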
00:06:08.111 [2024-07-24 21:27:52.932842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62275 ] 00:06:08.111 [2024-07-24 21:27:53.069238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.369 [2024-07-24 21:27:53.164656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.369 [2024-07-24 21:27:53.237981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.629  Copying: 512/512 [B] (average 500 kBps) 00:06:08.629 00:06:08.629 21:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 88h3wwgvj0hobph5zsu16zggl0pqbvzb118o06q1p7qofnab1azbrdlwxut7j0uvlnkx4q2uqxoc66rr473a2joqhs7l8f6mvkcb5m03y86a9ertbb40r4hja903qsbf4yv8kepxan3kmi0i7a0l7czh5j6szfho98cn65hnu989u6we9zoyzmpuq2252ns4lc72s1zdz5ra8edqno80lvcx0b9t0k1hbs0nzqw3dnqrjmvn1rpx2tq313xtn42u3pku3btoqp34hdyoy8kv2v4enbzwvrsuenra24sp1ocssstztyh6547a8ee9yz8q59hmu9nyzmerg59fwx7wyk4w7b0v1dee5ftx1i66cl9pcna94mk68fognf6sy7ynxnzgjapel5nvs6hmh6lwvykwwtnupzy0ozk6uzx00lrjqtiu3ukncv0vfy9tiz0olmiwpcmml4wvtew12k3t8jtob58n6jlop6pf1e8m0gdmtdwju64eun4ws8t4v40k == \8\8\h\3\w\w\g\v\j\0\h\o\b\p\h\5\z\s\u\1\6\z\g\g\l\0\p\q\b\v\z\b\1\1\8\o\0\6\q\1\p\7\q\o\f\n\a\b\1\a\z\b\r\d\l\w\x\u\t\7\j\0\u\v\l\n\k\x\4\q\2\u\q\x\o\c\6\6\r\r\4\7\3\a\2\j\o\q\h\s\7\l\8\f\6\m\v\k\c\b\5\m\0\3\y\8\6\a\9\e\r\t\b\b\4\0\r\4\h\j\a\9\0\3\q\s\b\f\4\y\v\8\k\e\p\x\a\n\3\k\m\i\0\i\7\a\0\l\7\c\z\h\5\j\6\s\z\f\h\o\9\8\c\n\6\5\h\n\u\9\8\9\u\6\w\e\9\z\o\y\z\m\p\u\q\2\2\5\2\n\s\4\l\c\7\2\s\1\z\d\z\5\r\a\8\e\d\q\n\o\8\0\l\v\c\x\0\b\9\t\0\k\1\h\b\s\0\n\z\q\w\3\d\n\q\r\j\m\v\n\1\r\p\x\2\t\q\3\1\3\x\t\n\4\2\u\3\p\k\u\3\b\t\o\q\p\3\4\h\d\y\o\y\8\k\v\2\v\4\e\n\b\z\w\v\r\s\u\e\n\r\a\2\4\s\p\1\o\c\s\s\s\t\z\t\y\h\6\5\4\7\a\8\e\e\9\y\z\8\q\5\9\h\m\u\9\n\y\z\m\e\r\g\5\9\f\w\x\7\w\y\k\4\w\7\b\0\v\1\d\e\e\5\f\t\x\1\i\6\6\c\l\9\p\c\n\a\9\4\m\k\6\8\f\o\g\n\f\6\s\y\7\y\n\x\n\z\g\j\a\p\e\l\5\n\v\s\6\h\m\h\6\l\w\v\y\k\w\w\t\n\u\p\z\y\0\o\z\k\6\u\z\x\0\0\l\r\j\q\t\i\u\3\u\k\n\c\v\0\v\f\y\9\t\i\z\0\o\l\m\i\w\p\c\m\m\l\4\w\v\t\e\w\1\2\k\3\t\8\j\t\o\b\5\8\n\6\j\l\o\p\6\p\f\1\e\8\m\0\g\d\m\t\d\w\j\u\6\4\e\u\n\4\w\s\8\t\4\v\4\0\k ]] 00:06:08.629 21:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.629 21:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:08.629 [2024-07-24 21:27:53.609792] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:08.629 [2024-07-24 21:27:53.609909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62290 ] 00:06:08.887 [2024-07-24 21:27:53.746727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.887 [2024-07-24 21:27:53.843862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.144 [2024-07-24 21:27:53.917446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.403  Copying: 512/512 [B] (average 500 kBps) 00:06:09.403 00:06:09.403 21:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 88h3wwgvj0hobph5zsu16zggl0pqbvzb118o06q1p7qofnab1azbrdlwxut7j0uvlnkx4q2uqxoc66rr473a2joqhs7l8f6mvkcb5m03y86a9ertbb40r4hja903qsbf4yv8kepxan3kmi0i7a0l7czh5j6szfho98cn65hnu989u6we9zoyzmpuq2252ns4lc72s1zdz5ra8edqno80lvcx0b9t0k1hbs0nzqw3dnqrjmvn1rpx2tq313xtn42u3pku3btoqp34hdyoy8kv2v4enbzwvrsuenra24sp1ocssstztyh6547a8ee9yz8q59hmu9nyzmerg59fwx7wyk4w7b0v1dee5ftx1i66cl9pcna94mk68fognf6sy7ynxnzgjapel5nvs6hmh6lwvykwwtnupzy0ozk6uzx00lrjqtiu3ukncv0vfy9tiz0olmiwpcmml4wvtew12k3t8jtob58n6jlop6pf1e8m0gdmtdwju64eun4ws8t4v40k == \8\8\h\3\w\w\g\v\j\0\h\o\b\p\h\5\z\s\u\1\6\z\g\g\l\0\p\q\b\v\z\b\1\1\8\o\0\6\q\1\p\7\q\o\f\n\a\b\1\a\z\b\r\d\l\w\x\u\t\7\j\0\u\v\l\n\k\x\4\q\2\u\q\x\o\c\6\6\r\r\4\7\3\a\2\j\o\q\h\s\7\l\8\f\6\m\v\k\c\b\5\m\0\3\y\8\6\a\9\e\r\t\b\b\4\0\r\4\h\j\a\9\0\3\q\s\b\f\4\y\v\8\k\e\p\x\a\n\3\k\m\i\0\i\7\a\0\l\7\c\z\h\5\j\6\s\z\f\h\o\9\8\c\n\6\5\h\n\u\9\8\9\u\6\w\e\9\z\o\y\z\m\p\u\q\2\2\5\2\n\s\4\l\c\7\2\s\1\z\d\z\5\r\a\8\e\d\q\n\o\8\0\l\v\c\x\0\b\9\t\0\k\1\h\b\s\0\n\z\q\w\3\d\n\q\r\j\m\v\n\1\r\p\x\2\t\q\3\1\3\x\t\n\4\2\u\3\p\k\u\3\b\t\o\q\p\3\4\h\d\y\o\y\8\k\v\2\v\4\e\n\b\z\w\v\r\s\u\e\n\r\a\2\4\s\p\1\o\c\s\s\s\t\z\t\y\h\6\5\4\7\a\8\e\e\9\y\z\8\q\5\9\h\m\u\9\n\y\z\m\e\r\g\5\9\f\w\x\7\w\y\k\4\w\7\b\0\v\1\d\e\e\5\f\t\x\1\i\6\6\c\l\9\p\c\n\a\9\4\m\k\6\8\f\o\g\n\f\6\s\y\7\y\n\x\n\z\g\j\a\p\e\l\5\n\v\s\6\h\m\h\6\l\w\v\y\k\w\w\t\n\u\p\z\y\0\o\z\k\6\u\z\x\0\0\l\r\j\q\t\i\u\3\u\k\n\c\v\0\v\f\y\9\t\i\z\0\o\l\m\i\w\p\c\m\m\l\4\w\v\t\e\w\1\2\k\3\t\8\j\t\o\b\5\8\n\6\j\l\o\p\6\p\f\1\e\8\m\0\g\d\m\t\d\w\j\u\6\4\e\u\n\4\w\s\8\t\4\v\4\0\k ]] 00:06:09.403 21:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.403 21:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:09.403 [2024-07-24 21:27:54.322231] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:09.403 [2024-07-24 21:27:54.322900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62299 ] 00:06:09.662 [2024-07-24 21:27:54.469370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.662 [2024-07-24 21:27:54.600769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.920 [2024-07-24 21:27:54.681194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.178  Copying: 512/512 [B] (average 125 kBps) 00:06:10.179 00:06:10.179 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 88h3wwgvj0hobph5zsu16zggl0pqbvzb118o06q1p7qofnab1azbrdlwxut7j0uvlnkx4q2uqxoc66rr473a2joqhs7l8f6mvkcb5m03y86a9ertbb40r4hja903qsbf4yv8kepxan3kmi0i7a0l7czh5j6szfho98cn65hnu989u6we9zoyzmpuq2252ns4lc72s1zdz5ra8edqno80lvcx0b9t0k1hbs0nzqw3dnqrjmvn1rpx2tq313xtn42u3pku3btoqp34hdyoy8kv2v4enbzwvrsuenra24sp1ocssstztyh6547a8ee9yz8q59hmu9nyzmerg59fwx7wyk4w7b0v1dee5ftx1i66cl9pcna94mk68fognf6sy7ynxnzgjapel5nvs6hmh6lwvykwwtnupzy0ozk6uzx00lrjqtiu3ukncv0vfy9tiz0olmiwpcmml4wvtew12k3t8jtob58n6jlop6pf1e8m0gdmtdwju64eun4ws8t4v40k == \8\8\h\3\w\w\g\v\j\0\h\o\b\p\h\5\z\s\u\1\6\z\g\g\l\0\p\q\b\v\z\b\1\1\8\o\0\6\q\1\p\7\q\o\f\n\a\b\1\a\z\b\r\d\l\w\x\u\t\7\j\0\u\v\l\n\k\x\4\q\2\u\q\x\o\c\6\6\r\r\4\7\3\a\2\j\o\q\h\s\7\l\8\f\6\m\v\k\c\b\5\m\0\3\y\8\6\a\9\e\r\t\b\b\4\0\r\4\h\j\a\9\0\3\q\s\b\f\4\y\v\8\k\e\p\x\a\n\3\k\m\i\0\i\7\a\0\l\7\c\z\h\5\j\6\s\z\f\h\o\9\8\c\n\6\5\h\n\u\9\8\9\u\6\w\e\9\z\o\y\z\m\p\u\q\2\2\5\2\n\s\4\l\c\7\2\s\1\z\d\z\5\r\a\8\e\d\q\n\o\8\0\l\v\c\x\0\b\9\t\0\k\1\h\b\s\0\n\z\q\w\3\d\n\q\r\j\m\v\n\1\r\p\x\2\t\q\3\1\3\x\t\n\4\2\u\3\p\k\u\3\b\t\o\q\p\3\4\h\d\y\o\y\8\k\v\2\v\4\e\n\b\z\w\v\r\s\u\e\n\r\a\2\4\s\p\1\o\c\s\s\s\t\z\t\y\h\6\5\4\7\a\8\e\e\9\y\z\8\q\5\9\h\m\u\9\n\y\z\m\e\r\g\5\9\f\w\x\7\w\y\k\4\w\7\b\0\v\1\d\e\e\5\f\t\x\1\i\6\6\c\l\9\p\c\n\a\9\4\m\k\6\8\f\o\g\n\f\6\s\y\7\y\n\x\n\z\g\j\a\p\e\l\5\n\v\s\6\h\m\h\6\l\w\v\y\k\w\w\t\n\u\p\z\y\0\o\z\k\6\u\z\x\0\0\l\r\j\q\t\i\u\3\u\k\n\c\v\0\v\f\y\9\t\i\z\0\o\l\m\i\w\p\c\m\m\l\4\w\v\t\e\w\1\2\k\3\t\8\j\t\o\b\5\8\n\6\j\l\o\p\6\p\f\1\e\8\m\0\g\d\m\t\d\w\j\u\6\4\e\u\n\4\w\s\8\t\4\v\4\0\k ]] 00:06:10.179 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.179 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:10.179 [2024-07-24 21:27:55.088360] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:10.179 [2024-07-24 21:27:55.088473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62309 ] 00:06:10.437 [2024-07-24 21:27:55.225263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.437 [2024-07-24 21:27:55.349315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.437 [2024-07-24 21:27:55.425968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.952  Copying: 512/512 [B] (average 166 kBps) 00:06:10.952 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 88h3wwgvj0hobph5zsu16zggl0pqbvzb118o06q1p7qofnab1azbrdlwxut7j0uvlnkx4q2uqxoc66rr473a2joqhs7l8f6mvkcb5m03y86a9ertbb40r4hja903qsbf4yv8kepxan3kmi0i7a0l7czh5j6szfho98cn65hnu989u6we9zoyzmpuq2252ns4lc72s1zdz5ra8edqno80lvcx0b9t0k1hbs0nzqw3dnqrjmvn1rpx2tq313xtn42u3pku3btoqp34hdyoy8kv2v4enbzwvrsuenra24sp1ocssstztyh6547a8ee9yz8q59hmu9nyzmerg59fwx7wyk4w7b0v1dee5ftx1i66cl9pcna94mk68fognf6sy7ynxnzgjapel5nvs6hmh6lwvykwwtnupzy0ozk6uzx00lrjqtiu3ukncv0vfy9tiz0olmiwpcmml4wvtew12k3t8jtob58n6jlop6pf1e8m0gdmtdwju64eun4ws8t4v40k == \8\8\h\3\w\w\g\v\j\0\h\o\b\p\h\5\z\s\u\1\6\z\g\g\l\0\p\q\b\v\z\b\1\1\8\o\0\6\q\1\p\7\q\o\f\n\a\b\1\a\z\b\r\d\l\w\x\u\t\7\j\0\u\v\l\n\k\x\4\q\2\u\q\x\o\c\6\6\r\r\4\7\3\a\2\j\o\q\h\s\7\l\8\f\6\m\v\k\c\b\5\m\0\3\y\8\6\a\9\e\r\t\b\b\4\0\r\4\h\j\a\9\0\3\q\s\b\f\4\y\v\8\k\e\p\x\a\n\3\k\m\i\0\i\7\a\0\l\7\c\z\h\5\j\6\s\z\f\h\o\9\8\c\n\6\5\h\n\u\9\8\9\u\6\w\e\9\z\o\y\z\m\p\u\q\2\2\5\2\n\s\4\l\c\7\2\s\1\z\d\z\5\r\a\8\e\d\q\n\o\8\0\l\v\c\x\0\b\9\t\0\k\1\h\b\s\0\n\z\q\w\3\d\n\q\r\j\m\v\n\1\r\p\x\2\t\q\3\1\3\x\t\n\4\2\u\3\p\k\u\3\b\t\o\q\p\3\4\h\d\y\o\y\8\k\v\2\v\4\e\n\b\z\w\v\r\s\u\e\n\r\a\2\4\s\p\1\o\c\s\s\s\t\z\t\y\h\6\5\4\7\a\8\e\e\9\y\z\8\q\5\9\h\m\u\9\n\y\z\m\e\r\g\5\9\f\w\x\7\w\y\k\4\w\7\b\0\v\1\d\e\e\5\f\t\x\1\i\6\6\c\l\9\p\c\n\a\9\4\m\k\6\8\f\o\g\n\f\6\s\y\7\y\n\x\n\z\g\j\a\p\e\l\5\n\v\s\6\h\m\h\6\l\w\v\y\k\w\w\t\n\u\p\z\y\0\o\z\k\6\u\z\x\0\0\l\r\j\q\t\i\u\3\u\k\n\c\v\0\v\f\y\9\t\i\z\0\o\l\m\i\w\p\c\m\m\l\4\w\v\t\e\w\1\2\k\3\t\8\j\t\o\b\5\8\n\6\j\l\o\p\6\p\f\1\e\8\m\0\g\d\m\t\d\w\j\u\6\4\e\u\n\4\w\s\8\t\4\v\4\0\k ]] 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.952 21:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:10.952 [2024-07-24 21:27:55.829503] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:10.952 [2024-07-24 21:27:55.829648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62324 ] 00:06:11.211 [2024-07-24 21:27:55.965392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.211 [2024-07-24 21:27:56.055694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.211 [2024-07-24 21:27:56.129194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.482  Copying: 512/512 [B] (average 500 kBps) 00:06:11.482 00:06:11.482 21:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dtcewdyuoo5ddd32oryraa3ho3yrci60ix7ucq83y2ypbgxfk6yf9tyyfsqlsaemmc3e4epq2nr9t1a1hwnrappi5i8l1merzzd0ahpo7gjhpm37b9pukmlrtrvk1zy2vzdxselz2i6mx0bg5om5aqdbgd9wshtg6ukqkkukk5d1qpljf6y4cn6z2p1a4dify4pzzfo7x6y9ylg7t5a2vmt3t0wh9e4mmxrnu6u6em65la8d0qkwxjaxheh6y236vts4qbm6r2u2vmptzayjruz0t8jmg1w2p36zvqz89ys5ewin35upiy0c145hv6roi73tdfpltxj2j91he16l0qn8x3tps5unsnigewaoolxgpofn8vbhpkbj36ekvayyp7iul71iybnc7gt21vrlsqe6r0hdsie351sichwesoytbinxmsc8jor3kf8sj7ans30cuo5tjlhkldf3gmdeqqtol7u1rm7pl39a7r6gz946ix9s2rqwn0s355rq29dq == \d\t\c\e\w\d\y\u\o\o\5\d\d\d\3\2\o\r\y\r\a\a\3\h\o\3\y\r\c\i\6\0\i\x\7\u\c\q\8\3\y\2\y\p\b\g\x\f\k\6\y\f\9\t\y\y\f\s\q\l\s\a\e\m\m\c\3\e\4\e\p\q\2\n\r\9\t\1\a\1\h\w\n\r\a\p\p\i\5\i\8\l\1\m\e\r\z\z\d\0\a\h\p\o\7\g\j\h\p\m\3\7\b\9\p\u\k\m\l\r\t\r\v\k\1\z\y\2\v\z\d\x\s\e\l\z\2\i\6\m\x\0\b\g\5\o\m\5\a\q\d\b\g\d\9\w\s\h\t\g\6\u\k\q\k\k\u\k\k\5\d\1\q\p\l\j\f\6\y\4\c\n\6\z\2\p\1\a\4\d\i\f\y\4\p\z\z\f\o\7\x\6\y\9\y\l\g\7\t\5\a\2\v\m\t\3\t\0\w\h\9\e\4\m\m\x\r\n\u\6\u\6\e\m\6\5\l\a\8\d\0\q\k\w\x\j\a\x\h\e\h\6\y\2\3\6\v\t\s\4\q\b\m\6\r\2\u\2\v\m\p\t\z\a\y\j\r\u\z\0\t\8\j\m\g\1\w\2\p\3\6\z\v\q\z\8\9\y\s\5\e\w\i\n\3\5\u\p\i\y\0\c\1\4\5\h\v\6\r\o\i\7\3\t\d\f\p\l\t\x\j\2\j\9\1\h\e\1\6\l\0\q\n\8\x\3\t\p\s\5\u\n\s\n\i\g\e\w\a\o\o\l\x\g\p\o\f\n\8\v\b\h\p\k\b\j\3\6\e\k\v\a\y\y\p\7\i\u\l\7\1\i\y\b\n\c\7\g\t\2\1\v\r\l\s\q\e\6\r\0\h\d\s\i\e\3\5\1\s\i\c\h\w\e\s\o\y\t\b\i\n\x\m\s\c\8\j\o\r\3\k\f\8\s\j\7\a\n\s\3\0\c\u\o\5\t\j\l\h\k\l\d\f\3\g\m\d\e\q\q\t\o\l\7\u\1\r\m\7\p\l\3\9\a\7\r\6\g\z\9\4\6\i\x\9\s\2\r\q\w\n\0\s\3\5\5\r\q\2\9\d\q ]] 00:06:11.482 21:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.482 21:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:11.746 [2024-07-24 21:27:56.507762] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:11.746 [2024-07-24 21:27:56.507880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62333 ] 00:06:11.746 [2024-07-24 21:27:56.646556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.003 [2024-07-24 21:27:56.774906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.003 [2024-07-24 21:27:56.857158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.261  Copying: 512/512 [B] (average 500 kBps) 00:06:12.261 00:06:12.262 21:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dtcewdyuoo5ddd32oryraa3ho3yrci60ix7ucq83y2ypbgxfk6yf9tyyfsqlsaemmc3e4epq2nr9t1a1hwnrappi5i8l1merzzd0ahpo7gjhpm37b9pukmlrtrvk1zy2vzdxselz2i6mx0bg5om5aqdbgd9wshtg6ukqkkukk5d1qpljf6y4cn6z2p1a4dify4pzzfo7x6y9ylg7t5a2vmt3t0wh9e4mmxrnu6u6em65la8d0qkwxjaxheh6y236vts4qbm6r2u2vmptzayjruz0t8jmg1w2p36zvqz89ys5ewin35upiy0c145hv6roi73tdfpltxj2j91he16l0qn8x3tps5unsnigewaoolxgpofn8vbhpkbj36ekvayyp7iul71iybnc7gt21vrlsqe6r0hdsie351sichwesoytbinxmsc8jor3kf8sj7ans30cuo5tjlhkldf3gmdeqqtol7u1rm7pl39a7r6gz946ix9s2rqwn0s355rq29dq == \d\t\c\e\w\d\y\u\o\o\5\d\d\d\3\2\o\r\y\r\a\a\3\h\o\3\y\r\c\i\6\0\i\x\7\u\c\q\8\3\y\2\y\p\b\g\x\f\k\6\y\f\9\t\y\y\f\s\q\l\s\a\e\m\m\c\3\e\4\e\p\q\2\n\r\9\t\1\a\1\h\w\n\r\a\p\p\i\5\i\8\l\1\m\e\r\z\z\d\0\a\h\p\o\7\g\j\h\p\m\3\7\b\9\p\u\k\m\l\r\t\r\v\k\1\z\y\2\v\z\d\x\s\e\l\z\2\i\6\m\x\0\b\g\5\o\m\5\a\q\d\b\g\d\9\w\s\h\t\g\6\u\k\q\k\k\u\k\k\5\d\1\q\p\l\j\f\6\y\4\c\n\6\z\2\p\1\a\4\d\i\f\y\4\p\z\z\f\o\7\x\6\y\9\y\l\g\7\t\5\a\2\v\m\t\3\t\0\w\h\9\e\4\m\m\x\r\n\u\6\u\6\e\m\6\5\l\a\8\d\0\q\k\w\x\j\a\x\h\e\h\6\y\2\3\6\v\t\s\4\q\b\m\6\r\2\u\2\v\m\p\t\z\a\y\j\r\u\z\0\t\8\j\m\g\1\w\2\p\3\6\z\v\q\z\8\9\y\s\5\e\w\i\n\3\5\u\p\i\y\0\c\1\4\5\h\v\6\r\o\i\7\3\t\d\f\p\l\t\x\j\2\j\9\1\h\e\1\6\l\0\q\n\8\x\3\t\p\s\5\u\n\s\n\i\g\e\w\a\o\o\l\x\g\p\o\f\n\8\v\b\h\p\k\b\j\3\6\e\k\v\a\y\y\p\7\i\u\l\7\1\i\y\b\n\c\7\g\t\2\1\v\r\l\s\q\e\6\r\0\h\d\s\i\e\3\5\1\s\i\c\h\w\e\s\o\y\t\b\i\n\x\m\s\c\8\j\o\r\3\k\f\8\s\j\7\a\n\s\3\0\c\u\o\5\t\j\l\h\k\l\d\f\3\g\m\d\e\q\q\t\o\l\7\u\1\r\m\7\p\l\3\9\a\7\r\6\g\z\9\4\6\i\x\9\s\2\r\q\w\n\0\s\3\5\5\r\q\2\9\d\q ]] 00:06:12.262 21:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.262 21:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:12.262 [2024-07-24 21:27:57.257430] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:12.262 [2024-07-24 21:27:57.257538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62343 ] 00:06:12.520 [2024-07-24 21:27:57.396283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.520 [2024-07-24 21:27:57.503027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.779 [2024-07-24 21:27:57.573990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.038  Copying: 512/512 [B] (average 166 kBps) 00:06:13.038 00:06:13.038 21:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dtcewdyuoo5ddd32oryraa3ho3yrci60ix7ucq83y2ypbgxfk6yf9tyyfsqlsaemmc3e4epq2nr9t1a1hwnrappi5i8l1merzzd0ahpo7gjhpm37b9pukmlrtrvk1zy2vzdxselz2i6mx0bg5om5aqdbgd9wshtg6ukqkkukk5d1qpljf6y4cn6z2p1a4dify4pzzfo7x6y9ylg7t5a2vmt3t0wh9e4mmxrnu6u6em65la8d0qkwxjaxheh6y236vts4qbm6r2u2vmptzayjruz0t8jmg1w2p36zvqz89ys5ewin35upiy0c145hv6roi73tdfpltxj2j91he16l0qn8x3tps5unsnigewaoolxgpofn8vbhpkbj36ekvayyp7iul71iybnc7gt21vrlsqe6r0hdsie351sichwesoytbinxmsc8jor3kf8sj7ans30cuo5tjlhkldf3gmdeqqtol7u1rm7pl39a7r6gz946ix9s2rqwn0s355rq29dq == \d\t\c\e\w\d\y\u\o\o\5\d\d\d\3\2\o\r\y\r\a\a\3\h\o\3\y\r\c\i\6\0\i\x\7\u\c\q\8\3\y\2\y\p\b\g\x\f\k\6\y\f\9\t\y\y\f\s\q\l\s\a\e\m\m\c\3\e\4\e\p\q\2\n\r\9\t\1\a\1\h\w\n\r\a\p\p\i\5\i\8\l\1\m\e\r\z\z\d\0\a\h\p\o\7\g\j\h\p\m\3\7\b\9\p\u\k\m\l\r\t\r\v\k\1\z\y\2\v\z\d\x\s\e\l\z\2\i\6\m\x\0\b\g\5\o\m\5\a\q\d\b\g\d\9\w\s\h\t\g\6\u\k\q\k\k\u\k\k\5\d\1\q\p\l\j\f\6\y\4\c\n\6\z\2\p\1\a\4\d\i\f\y\4\p\z\z\f\o\7\x\6\y\9\y\l\g\7\t\5\a\2\v\m\t\3\t\0\w\h\9\e\4\m\m\x\r\n\u\6\u\6\e\m\6\5\l\a\8\d\0\q\k\w\x\j\a\x\h\e\h\6\y\2\3\6\v\t\s\4\q\b\m\6\r\2\u\2\v\m\p\t\z\a\y\j\r\u\z\0\t\8\j\m\g\1\w\2\p\3\6\z\v\q\z\8\9\y\s\5\e\w\i\n\3\5\u\p\i\y\0\c\1\4\5\h\v\6\r\o\i\7\3\t\d\f\p\l\t\x\j\2\j\9\1\h\e\1\6\l\0\q\n\8\x\3\t\p\s\5\u\n\s\n\i\g\e\w\a\o\o\l\x\g\p\o\f\n\8\v\b\h\p\k\b\j\3\6\e\k\v\a\y\y\p\7\i\u\l\7\1\i\y\b\n\c\7\g\t\2\1\v\r\l\s\q\e\6\r\0\h\d\s\i\e\3\5\1\s\i\c\h\w\e\s\o\y\t\b\i\n\x\m\s\c\8\j\o\r\3\k\f\8\s\j\7\a\n\s\3\0\c\u\o\5\t\j\l\h\k\l\d\f\3\g\m\d\e\q\q\t\o\l\7\u\1\r\m\7\p\l\3\9\a\7\r\6\g\z\9\4\6\i\x\9\s\2\r\q\w\n\0\s\3\5\5\r\q\2\9\d\q ]] 00:06:13.038 21:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.038 21:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:13.038 [2024-07-24 21:27:58.006114] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:13.038 [2024-07-24 21:27:58.006229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62358 ] 00:06:13.297 [2024-07-24 21:27:58.143427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.297 [2024-07-24 21:27:58.244330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.556 [2024-07-24 21:27:58.321006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.815  Copying: 512/512 [B] (average 250 kBps) 00:06:13.816 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ dtcewdyuoo5ddd32oryraa3ho3yrci60ix7ucq83y2ypbgxfk6yf9tyyfsqlsaemmc3e4epq2nr9t1a1hwnrappi5i8l1merzzd0ahpo7gjhpm37b9pukmlrtrvk1zy2vzdxselz2i6mx0bg5om5aqdbgd9wshtg6ukqkkukk5d1qpljf6y4cn6z2p1a4dify4pzzfo7x6y9ylg7t5a2vmt3t0wh9e4mmxrnu6u6em65la8d0qkwxjaxheh6y236vts4qbm6r2u2vmptzayjruz0t8jmg1w2p36zvqz89ys5ewin35upiy0c145hv6roi73tdfpltxj2j91he16l0qn8x3tps5unsnigewaoolxgpofn8vbhpkbj36ekvayyp7iul71iybnc7gt21vrlsqe6r0hdsie351sichwesoytbinxmsc8jor3kf8sj7ans30cuo5tjlhkldf3gmdeqqtol7u1rm7pl39a7r6gz946ix9s2rqwn0s355rq29dq == \d\t\c\e\w\d\y\u\o\o\5\d\d\d\3\2\o\r\y\r\a\a\3\h\o\3\y\r\c\i\6\0\i\x\7\u\c\q\8\3\y\2\y\p\b\g\x\f\k\6\y\f\9\t\y\y\f\s\q\l\s\a\e\m\m\c\3\e\4\e\p\q\2\n\r\9\t\1\a\1\h\w\n\r\a\p\p\i\5\i\8\l\1\m\e\r\z\z\d\0\a\h\p\o\7\g\j\h\p\m\3\7\b\9\p\u\k\m\l\r\t\r\v\k\1\z\y\2\v\z\d\x\s\e\l\z\2\i\6\m\x\0\b\g\5\o\m\5\a\q\d\b\g\d\9\w\s\h\t\g\6\u\k\q\k\k\u\k\k\5\d\1\q\p\l\j\f\6\y\4\c\n\6\z\2\p\1\a\4\d\i\f\y\4\p\z\z\f\o\7\x\6\y\9\y\l\g\7\t\5\a\2\v\m\t\3\t\0\w\h\9\e\4\m\m\x\r\n\u\6\u\6\e\m\6\5\l\a\8\d\0\q\k\w\x\j\a\x\h\e\h\6\y\2\3\6\v\t\s\4\q\b\m\6\r\2\u\2\v\m\p\t\z\a\y\j\r\u\z\0\t\8\j\m\g\1\w\2\p\3\6\z\v\q\z\8\9\y\s\5\e\w\i\n\3\5\u\p\i\y\0\c\1\4\5\h\v\6\r\o\i\7\3\t\d\f\p\l\t\x\j\2\j\9\1\h\e\1\6\l\0\q\n\8\x\3\t\p\s\5\u\n\s\n\i\g\e\w\a\o\o\l\x\g\p\o\f\n\8\v\b\h\p\k\b\j\3\6\e\k\v\a\y\y\p\7\i\u\l\7\1\i\y\b\n\c\7\g\t\2\1\v\r\l\s\q\e\6\r\0\h\d\s\i\e\3\5\1\s\i\c\h\w\e\s\o\y\t\b\i\n\x\m\s\c\8\j\o\r\3\k\f\8\s\j\7\a\n\s\3\0\c\u\o\5\t\j\l\h\k\l\d\f\3\g\m\d\e\q\q\t\o\l\7\u\1\r\m\7\p\l\3\9\a\7\r\6\g\z\9\4\6\i\x\9\s\2\r\q\w\n\0\s\3\5\5\r\q\2\9\d\q ]] 00:06:13.816 00:06:13.816 real 0m5.806s 00:06:13.816 user 0m3.374s 00:06:13.816 sys 0m2.972s 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:13.816 ************************************ 00:06:13.816 END TEST dd_flags_misc 00:06:13.816 ************************************ 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:13.816 * Second test run, disabling liburing, forcing AIO 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
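The dd_flags_misc output above pairs each read-side flag (direct, nonblock) with each write-side flag (direct, nonblock, sync, dsync) and checks that the copied bytes match the source. A minimal stand-alone sketch of that flag matrix follows; it is not part of the recorded run, it substitutes GNU dd for spdk_dd (both accept the same iflag=/oflag= names seen in the log), and the file names and sizes are illustrative only.

#!/usr/bin/env bash
set -euo pipefail
src=dd.dump0 dst=dd.dump1
head -c 4096 /dev/urandom > "$src"         # a small payload, like the test's gen_bytes
flags_ro=(direct nonblock)                 # read-side flags exercised by the suite
flags_rw=("${flags_ro[@]}" sync dsync)     # write side adds sync and dsync
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    # 4096-byte block keeps oflag=direct happy on common filesystems
    dd if="$src" iflag="$flag_ro" of="$dst" oflag="$flag_rw" bs=4096 count=1 2>/dev/null
    cmp -s "$src" "$dst" || echo "mismatch with iflag=$flag_ro oflag=$flag_rw"
  done
done
rm -f "$src" "$dst"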
00:06:13.816 ************************************ 00:06:13.816 START TEST dd_flag_append_forced_aio 00:06:13.816 ************************************ 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=6t49u1hjqi5jw3bb31u830e4fa1asgdz 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=7m7yimh14n4ydu9799btagh36rjae0mp 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 6t49u1hjqi5jw3bb31u830e4fa1asgdz 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 7m7yimh14n4ydu9799btagh36rjae0mp 00:06:13.816 21:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:13.816 [2024-07-24 21:27:58.806975] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:13.816 [2024-07-24 21:27:58.807087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62392 ] 00:06:14.074 [2024-07-24 21:27:58.949103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.332 [2024-07-24 21:27:59.079841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.332 [2024-07-24 21:27:59.158508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.591  Copying: 32/32 [B] (average 31 kBps) 00:06:14.591 00:06:14.591 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 7m7yimh14n4ydu9799btagh36rjae0mp6t49u1hjqi5jw3bb31u830e4fa1asgdz == \7\m\7\y\i\m\h\1\4\n\4\y\d\u\9\7\9\9\b\t\a\g\h\3\6\r\j\a\e\0\m\p\6\t\4\9\u\1\h\j\q\i\5\j\w\3\b\b\3\1\u\8\3\0\e\4\f\a\1\a\s\g\d\z ]] 00:06:14.591 00:06:14.591 real 0m0.803s 00:06:14.591 user 0m0.463s 00:06:14.591 sys 0m0.210s 00:06:14.591 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.591 ************************************ 00:06:14.591 END TEST dd_flag_append_forced_aio 00:06:14.591 ************************************ 00:06:14.591 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:14.850 ************************************ 00:06:14.850 START TEST dd_flag_directory_forced_aio 00:06:14.850 ************************************ 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.850 21:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.850 [2024-07-24 21:27:59.665703] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:14.850 [2024-07-24 21:27:59.665827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62418 ] 00:06:14.850 [2024-07-24 21:27:59.802384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.110 [2024-07-24 21:27:59.905768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.110 [2024-07-24 21:27:59.979395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.110 [2024-07-24 21:28:00.021904] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.110 [2024-07-24 21:28:00.021983] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.110 [2024-07-24 21:28:00.022009] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.369 [2024-07-24 21:28:00.179787] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.369 21:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:15.628 [2024-07-24 21:28:00.392939] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:15.628 [2024-07-24 21:28:00.393058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62428 ] 00:06:15.628 [2024-07-24 21:28:00.532361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.887 [2024-07-24 21:28:00.629650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.887 [2024-07-24 21:28:00.703218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.887 [2024-07-24 21:28:00.748412] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.887 [2024-07-24 21:28:00.748486] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.887 [2024-07-24 21:28:00.748507] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.147 [2024-07-24 21:28:00.907718] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:06:16.147 00:06:16.147 real 0m1.415s 00:06:16.147 user 0m0.828s 00:06:16.147 sys 0m0.376s 00:06:16.147 ************************************ 00:06:16.147 END TEST dd_flag_directory_forced_aio 00:06:16.147 ************************************ 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.147 ************************************ 00:06:16.147 START TEST dd_flag_nofollow_forced_aio 00:06:16.147 ************************************ 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.147 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.148 21:28:01 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.148 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.148 [2024-07-24 21:28:01.143744] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:16.148 [2024-07-24 21:28:01.143854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:06:16.407 [2024-07-24 21:28:01.281029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.407 [2024-07-24 21:28:01.376970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.666 [2024-07-24 21:28:01.454303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.666 [2024-07-24 21:28:01.502868] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:16.666 [2024-07-24 21:28:01.502932] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:16.666 [2024-07-24 21:28:01.502947] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.924 [2024-07-24 21:28:01.665875] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.924 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:16.924 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.924 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.925 21:28:01 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.925 21:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:16.925 [2024-07-24 21:28:01.886006] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:16.925 [2024-07-24 21:28:01.886112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62477 ] 00:06:17.184 [2024-07-24 21:28:02.024378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.184 [2024-07-24 21:28:02.118363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.442 [2024-07-24 21:28:02.191021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.442 [2024-07-24 21:28:02.233716] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.442 [2024-07-24 21:28:02.233795] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.442 [2024-07-24 21:28:02.233826] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.442 [2024-07-24 21:28:02.395631] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:17.701 21:28:02 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:17.701 21:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.701 [2024-07-24 21:28:02.605254] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:17.701 [2024-07-24 21:28:02.605372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62483 ] 00:06:17.960 [2024-07-24 21:28:02.742062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.960 [2024-07-24 21:28:02.852801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.960 [2024-07-24 21:28:02.925647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.478  Copying: 512/512 [B] (average 500 kBps) 00:06:18.478 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ qd7osk1rmr2ocnhixafer4sj31piloi3lhn7o8r79g4lg0pqrs84m46h78opw59vqa3smjzaqtqg6koff282jt7sr4v1u6ajiwjiemiyb0s9q02jivg2dxpaluc7cj991ljiiyi6k2bhu1cl7enn8ohv0px3f97jboxhc7js885au8seiecvg0m2dd582rzmfsgoxen5wqmkdg8jgc8xytnpevsr29lrw20more3s5r64z0969xsglz3cil4e22dkhn5i4pabamk0tmpy3ctgmz9h2sd9vbuo5ncrofe89gv5jm695bi4sx2uvu9pkf7t5um5cbj77wj97mqvzyam8jcbnme30q7abq7zoab4g9bt6w0eelz5qyksp2f7vv5nd1335imjesav71ut9vq0j7roihp9y3ftdk2jmeyzxxs5q77jqjz4f32grw30shecjpp9vnzp9ztgauil8aavvkij3obzayopjgohbq9wgl67l3hznqgq9imtmv4d30x == \q\d\7\o\s\k\1\r\m\r\2\o\c\n\h\i\x\a\f\e\r\4\s\j\3\1\p\i\l\o\i\3\l\h\n\7\o\8\r\7\9\g\4\l\g\0\p\q\r\s\8\4\m\4\6\h\7\8\o\p\w\5\9\v\q\a\3\s\m\j\z\a\q\t\q\g\6\k\o\f\f\2\8\2\j\t\7\s\r\4\v\1\u\6\a\j\i\w\j\i\e\m\i\y\b\0\s\9\q\0\2\j\i\v\g\2\d\x\p\a\l\u\c\7\c\j\9\9\1\l\j\i\i\y\i\6\k\2\b\h\u\1\c\l\7\e\n\n\8\o\h\v\0\p\x\3\f\9\7\j\b\o\x\h\c\7\j\s\8\8\5\a\u\8\s\e\i\e\c\v\g\0\m\2\d\d\5\8\2\r\z\m\f\s\g\o\x\e\n\5\w\q\m\k\d\g\8\j\g\c\8\x\y\t\n\p\e\v\s\r\2\9\l\r\w\2\0\m\o\r\e\3\s\5\r\6\4\z\0\9\6\9\x\s\g\l\z\3\c\i\l\4\e\2\2\d\k\h\n\5\i\4\p\a\b\a\m\k\0\t\m\p\y\3\c\t\g\m\z\9\h\2\s\d\9\v\b\u\o\5\n\c\r\o\f\e\8\9\g\v\5\j\m\6\9\5\b\i\4\s\x\2\u\v\u\9\p\k\f\7\t\5\u\m\5\c\b\j\7\7\w\j\9\7\m\q\v\z\y\a\m\8\j\c\b\n\m\e\3\0\q\7\a\b\q\7\z\o\a\b\4\g\9\b\t\6\w\0\e\e\l\z\5\q\y\k\s\p\2\f\7\v\v\5\n\d\1\3\3\5\i\m\j\e\s\a\v\7\1\u\t\9\v\q\0\j\7\r\o\i\h\p\9\y\3\f\t\d\k\2\j\m\e\y\z\x\x\s\5\q\7\7\j\q\j\z\4\f\3\2\g\r\w\3\0\s\h\e\c\j\p\p\9\v\n\z\p\9\z\t\g\a\u\i\l\8\a\a\v\v\k\i\j\3\o\b\z\a\y\o\p\j\g\o\h\b\q\9\w\g\l\6\7\l\3\h\z\n\q\g\q\9\i\m\t\m\v\4\d\3\0\x ]] 00:06:18.478 00:06:18.478 real 0m2.201s 00:06:18.478 user 0m1.303s 00:06:18.478 sys 0m0.566s 00:06:18.478 ************************************ 00:06:18.478 END TEST dd_flag_nofollow_forced_aio 00:06:18.478 ************************************ 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:18.478 
21:28:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.478 ************************************ 00:06:18.478 START TEST dd_flag_noatime_forced_aio 00:06:18.478 ************************************ 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721856482 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721856483 00:06:18.478 21:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:19.414 21:28:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.673 [2024-07-24 21:28:04.414756] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:19.673 [2024-07-24 21:28:04.414878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62525 ] 00:06:19.673 [2024-07-24 21:28:04.555185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.932 [2024-07-24 21:28:04.680853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.932 [2024-07-24 21:28:04.754802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.191  Copying: 512/512 [B] (average 500 kBps) 00:06:20.191 00:06:20.191 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.191 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721856482 )) 00:06:20.191 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.191 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721856483 )) 00:06:20.191 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.449 [2024-07-24 21:28:05.195015] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:20.449 [2024-07-24 21:28:05.195178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:06:20.449 [2024-07-24 21:28:05.334941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.449 [2024-07-24 21:28:05.448415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.816 [2024-07-24 21:28:05.521352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.074  Copying: 512/512 [B] (average 500 kBps) 00:06:21.074 00:06:21.074 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721856485 )) 00:06:21.075 00:06:21.075 real 0m2.573s 00:06:21.075 user 0m0.919s 00:06:21.075 sys 0m0.407s 00:06:21.075 ************************************ 00:06:21.075 END TEST dd_flag_noatime_forced_aio 00:06:21.075 ************************************ 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 
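The noatime test that just finished records the source file's access time with stat --printf=%X, copies it with --iflag=noatime, and asserts the atime did not move. A rough stand-alone equivalent is sketched below, again using GNU dd in place of spdk_dd (both support iflag=noatime); paths are illustrative.

#!/usr/bin/env bash
set -euo pipefail
src=dd.dump0 dst=dd.dump1
head -c 512 /dev/urandom > "$src"
atime_before=$(stat --printf=%X "$src")    # access time before the copy
sleep 1                                    # give a changed atime a chance to show
dd if="$src" iflag=noatime of="$dst" bs=512 count=1 2>/dev/null
atime_after=$(stat --printf=%X "$src")
# With noatime the source's atime should stay put; note that on relatime mounts
# a read this soon after creation would not bump the atime anyway.
(( atime_after == atime_before )) && echo "noatime honoured" || echo "atime changed"
rm -f "$src" "$dst"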
************************************ 00:06:21.075 START TEST dd_flags_misc_forced_aio 00:06:21.075 ************************************ 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.075 21:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:21.075 [2024-07-24 21:28:06.022252] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:21.075 [2024-07-24 21:28:06.022360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62575 ] 00:06:21.334 [2024-07-24 21:28:06.159481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.334 [2024-07-24 21:28:06.268314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.593 [2024-07-24 21:28:06.341809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.853  Copying: 512/512 [B] (average 500 kBps) 00:06:21.853 00:06:21.853 21:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrinqs31jfec695mgzoxa2zvvhmmf18q8a484ovw4wciwxv1nyfsnkvmu7legkpatbwy58958uqksmhov59euobaqv6it75b4jwb6a2csmwssllpatgddyx7bom277p2jgmf77bp1glno3tonnnfcl0px9iujez665pyuru4lwca47s93uuwqyjmjlsnaj0c981fjitiskx94zym53kw8q61d3u7jldss16zyx9qni0hanse7ojaggjeh3pzv5kwwx84oxnk1wjtv0hdr49ycp8e9oe4yjicd8754wnu0ccgb4n7o9lpoaothmsu28vkfkphg1x8a0fi0gzufypp5ddabccdr05ddm2ypdkl1xghjvgmv55iwdadygvktp65gp5zqutln8u686q9wr8g52u3mdzk8buv5j16bqmwsvz5zbdoxknk2pfsyiolqs5q9mg8h1c7p783ao8kfgdheqx0kwkenmwc96rbbs578usxp0asofehw2hfm1oglr41 == 
\z\r\i\n\q\s\3\1\j\f\e\c\6\9\5\m\g\z\o\x\a\2\z\v\v\h\m\m\f\1\8\q\8\a\4\8\4\o\v\w\4\w\c\i\w\x\v\1\n\y\f\s\n\k\v\m\u\7\l\e\g\k\p\a\t\b\w\y\5\8\9\5\8\u\q\k\s\m\h\o\v\5\9\e\u\o\b\a\q\v\6\i\t\7\5\b\4\j\w\b\6\a\2\c\s\m\w\s\s\l\l\p\a\t\g\d\d\y\x\7\b\o\m\2\7\7\p\2\j\g\m\f\7\7\b\p\1\g\l\n\o\3\t\o\n\n\n\f\c\l\0\p\x\9\i\u\j\e\z\6\6\5\p\y\u\r\u\4\l\w\c\a\4\7\s\9\3\u\u\w\q\y\j\m\j\l\s\n\a\j\0\c\9\8\1\f\j\i\t\i\s\k\x\9\4\z\y\m\5\3\k\w\8\q\6\1\d\3\u\7\j\l\d\s\s\1\6\z\y\x\9\q\n\i\0\h\a\n\s\e\7\o\j\a\g\g\j\e\h\3\p\z\v\5\k\w\w\x\8\4\o\x\n\k\1\w\j\t\v\0\h\d\r\4\9\y\c\p\8\e\9\o\e\4\y\j\i\c\d\8\7\5\4\w\n\u\0\c\c\g\b\4\n\7\o\9\l\p\o\a\o\t\h\m\s\u\2\8\v\k\f\k\p\h\g\1\x\8\a\0\f\i\0\g\z\u\f\y\p\p\5\d\d\a\b\c\c\d\r\0\5\d\d\m\2\y\p\d\k\l\1\x\g\h\j\v\g\m\v\5\5\i\w\d\a\d\y\g\v\k\t\p\6\5\g\p\5\z\q\u\t\l\n\8\u\6\8\6\q\9\w\r\8\g\5\2\u\3\m\d\z\k\8\b\u\v\5\j\1\6\b\q\m\w\s\v\z\5\z\b\d\o\x\k\n\k\2\p\f\s\y\i\o\l\q\s\5\q\9\m\g\8\h\1\c\7\p\7\8\3\a\o\8\k\f\g\d\h\e\q\x\0\k\w\k\e\n\m\w\c\9\6\r\b\b\s\5\7\8\u\s\x\p\0\a\s\o\f\e\h\w\2\h\f\m\1\o\g\l\r\4\1 ]] 00:06:21.853 21:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.853 21:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:21.853 [2024-07-24 21:28:06.764588] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:21.853 [2024-07-24 21:28:06.764696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62577 ] 00:06:22.112 [2024-07-24 21:28:06.899501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.112 [2024-07-24 21:28:07.014112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.112 [2024-07-24 21:28:07.087937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.630  Copying: 512/512 [B] (average 500 kBps) 00:06:22.630 00:06:22.630 21:28:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrinqs31jfec695mgzoxa2zvvhmmf18q8a484ovw4wciwxv1nyfsnkvmu7legkpatbwy58958uqksmhov59euobaqv6it75b4jwb6a2csmwssllpatgddyx7bom277p2jgmf77bp1glno3tonnnfcl0px9iujez665pyuru4lwca47s93uuwqyjmjlsnaj0c981fjitiskx94zym53kw8q61d3u7jldss16zyx9qni0hanse7ojaggjeh3pzv5kwwx84oxnk1wjtv0hdr49ycp8e9oe4yjicd8754wnu0ccgb4n7o9lpoaothmsu28vkfkphg1x8a0fi0gzufypp5ddabccdr05ddm2ypdkl1xghjvgmv55iwdadygvktp65gp5zqutln8u686q9wr8g52u3mdzk8buv5j16bqmwsvz5zbdoxknk2pfsyiolqs5q9mg8h1c7p783ao8kfgdheqx0kwkenmwc96rbbs578usxp0asofehw2hfm1oglr41 == 
\z\r\i\n\q\s\3\1\j\f\e\c\6\9\5\m\g\z\o\x\a\2\z\v\v\h\m\m\f\1\8\q\8\a\4\8\4\o\v\w\4\w\c\i\w\x\v\1\n\y\f\s\n\k\v\m\u\7\l\e\g\k\p\a\t\b\w\y\5\8\9\5\8\u\q\k\s\m\h\o\v\5\9\e\u\o\b\a\q\v\6\i\t\7\5\b\4\j\w\b\6\a\2\c\s\m\w\s\s\l\l\p\a\t\g\d\d\y\x\7\b\o\m\2\7\7\p\2\j\g\m\f\7\7\b\p\1\g\l\n\o\3\t\o\n\n\n\f\c\l\0\p\x\9\i\u\j\e\z\6\6\5\p\y\u\r\u\4\l\w\c\a\4\7\s\9\3\u\u\w\q\y\j\m\j\l\s\n\a\j\0\c\9\8\1\f\j\i\t\i\s\k\x\9\4\z\y\m\5\3\k\w\8\q\6\1\d\3\u\7\j\l\d\s\s\1\6\z\y\x\9\q\n\i\0\h\a\n\s\e\7\o\j\a\g\g\j\e\h\3\p\z\v\5\k\w\w\x\8\4\o\x\n\k\1\w\j\t\v\0\h\d\r\4\9\y\c\p\8\e\9\o\e\4\y\j\i\c\d\8\7\5\4\w\n\u\0\c\c\g\b\4\n\7\o\9\l\p\o\a\o\t\h\m\s\u\2\8\v\k\f\k\p\h\g\1\x\8\a\0\f\i\0\g\z\u\f\y\p\p\5\d\d\a\b\c\c\d\r\0\5\d\d\m\2\y\p\d\k\l\1\x\g\h\j\v\g\m\v\5\5\i\w\d\a\d\y\g\v\k\t\p\6\5\g\p\5\z\q\u\t\l\n\8\u\6\8\6\q\9\w\r\8\g\5\2\u\3\m\d\z\k\8\b\u\v\5\j\1\6\b\q\m\w\s\v\z\5\z\b\d\o\x\k\n\k\2\p\f\s\y\i\o\l\q\s\5\q\9\m\g\8\h\1\c\7\p\7\8\3\a\o\8\k\f\g\d\h\e\q\x\0\k\w\k\e\n\m\w\c\9\6\r\b\b\s\5\7\8\u\s\x\p\0\a\s\o\f\e\h\w\2\h\f\m\1\o\g\l\r\4\1 ]] 00:06:22.630 21:28:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.630 21:28:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:22.630 [2024-07-24 21:28:07.498536] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:22.630 [2024-07-24 21:28:07.498662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62590 ] 00:06:22.889 [2024-07-24 21:28:07.631583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.889 [2024-07-24 21:28:07.743455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.889 [2024-07-24 21:28:07.816416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.458  Copying: 512/512 [B] (average 166 kBps) 00:06:23.458 00:06:23.458 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrinqs31jfec695mgzoxa2zvvhmmf18q8a484ovw4wciwxv1nyfsnkvmu7legkpatbwy58958uqksmhov59euobaqv6it75b4jwb6a2csmwssllpatgddyx7bom277p2jgmf77bp1glno3tonnnfcl0px9iujez665pyuru4lwca47s93uuwqyjmjlsnaj0c981fjitiskx94zym53kw8q61d3u7jldss16zyx9qni0hanse7ojaggjeh3pzv5kwwx84oxnk1wjtv0hdr49ycp8e9oe4yjicd8754wnu0ccgb4n7o9lpoaothmsu28vkfkphg1x8a0fi0gzufypp5ddabccdr05ddm2ypdkl1xghjvgmv55iwdadygvktp65gp5zqutln8u686q9wr8g52u3mdzk8buv5j16bqmwsvz5zbdoxknk2pfsyiolqs5q9mg8h1c7p783ao8kfgdheqx0kwkenmwc96rbbs578usxp0asofehw2hfm1oglr41 == 
\z\r\i\n\q\s\3\1\j\f\e\c\6\9\5\m\g\z\o\x\a\2\z\v\v\h\m\m\f\1\8\q\8\a\4\8\4\o\v\w\4\w\c\i\w\x\v\1\n\y\f\s\n\k\v\m\u\7\l\e\g\k\p\a\t\b\w\y\5\8\9\5\8\u\q\k\s\m\h\o\v\5\9\e\u\o\b\a\q\v\6\i\t\7\5\b\4\j\w\b\6\a\2\c\s\m\w\s\s\l\l\p\a\t\g\d\d\y\x\7\b\o\m\2\7\7\p\2\j\g\m\f\7\7\b\p\1\g\l\n\o\3\t\o\n\n\n\f\c\l\0\p\x\9\i\u\j\e\z\6\6\5\p\y\u\r\u\4\l\w\c\a\4\7\s\9\3\u\u\w\q\y\j\m\j\l\s\n\a\j\0\c\9\8\1\f\j\i\t\i\s\k\x\9\4\z\y\m\5\3\k\w\8\q\6\1\d\3\u\7\j\l\d\s\s\1\6\z\y\x\9\q\n\i\0\h\a\n\s\e\7\o\j\a\g\g\j\e\h\3\p\z\v\5\k\w\w\x\8\4\o\x\n\k\1\w\j\t\v\0\h\d\r\4\9\y\c\p\8\e\9\o\e\4\y\j\i\c\d\8\7\5\4\w\n\u\0\c\c\g\b\4\n\7\o\9\l\p\o\a\o\t\h\m\s\u\2\8\v\k\f\k\p\h\g\1\x\8\a\0\f\i\0\g\z\u\f\y\p\p\5\d\d\a\b\c\c\d\r\0\5\d\d\m\2\y\p\d\k\l\1\x\g\h\j\v\g\m\v\5\5\i\w\d\a\d\y\g\v\k\t\p\6\5\g\p\5\z\q\u\t\l\n\8\u\6\8\6\q\9\w\r\8\g\5\2\u\3\m\d\z\k\8\b\u\v\5\j\1\6\b\q\m\w\s\v\z\5\z\b\d\o\x\k\n\k\2\p\f\s\y\i\o\l\q\s\5\q\9\m\g\8\h\1\c\7\p\7\8\3\a\o\8\k\f\g\d\h\e\q\x\0\k\w\k\e\n\m\w\c\9\6\r\b\b\s\5\7\8\u\s\x\p\0\a\s\o\f\e\h\w\2\h\f\m\1\o\g\l\r\4\1 ]] 00:06:23.458 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.458 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.458 [2024-07-24 21:28:08.245938] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:23.458 [2024-07-24 21:28:08.246056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62603 ] 00:06:23.458 [2024-07-24 21:28:08.381448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.717 [2024-07-24 21:28:08.483368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.717 [2024-07-24 21:28:08.556485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.976  Copying: 512/512 [B] (average 500 kBps) 00:06:23.976 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrinqs31jfec695mgzoxa2zvvhmmf18q8a484ovw4wciwxv1nyfsnkvmu7legkpatbwy58958uqksmhov59euobaqv6it75b4jwb6a2csmwssllpatgddyx7bom277p2jgmf77bp1glno3tonnnfcl0px9iujez665pyuru4lwca47s93uuwqyjmjlsnaj0c981fjitiskx94zym53kw8q61d3u7jldss16zyx9qni0hanse7ojaggjeh3pzv5kwwx84oxnk1wjtv0hdr49ycp8e9oe4yjicd8754wnu0ccgb4n7o9lpoaothmsu28vkfkphg1x8a0fi0gzufypp5ddabccdr05ddm2ypdkl1xghjvgmv55iwdadygvktp65gp5zqutln8u686q9wr8g52u3mdzk8buv5j16bqmwsvz5zbdoxknk2pfsyiolqs5q9mg8h1c7p783ao8kfgdheqx0kwkenmwc96rbbs578usxp0asofehw2hfm1oglr41 == 
\z\r\i\n\q\s\3\1\j\f\e\c\6\9\5\m\g\z\o\x\a\2\z\v\v\h\m\m\f\1\8\q\8\a\4\8\4\o\v\w\4\w\c\i\w\x\v\1\n\y\f\s\n\k\v\m\u\7\l\e\g\k\p\a\t\b\w\y\5\8\9\5\8\u\q\k\s\m\h\o\v\5\9\e\u\o\b\a\q\v\6\i\t\7\5\b\4\j\w\b\6\a\2\c\s\m\w\s\s\l\l\p\a\t\g\d\d\y\x\7\b\o\m\2\7\7\p\2\j\g\m\f\7\7\b\p\1\g\l\n\o\3\t\o\n\n\n\f\c\l\0\p\x\9\i\u\j\e\z\6\6\5\p\y\u\r\u\4\l\w\c\a\4\7\s\9\3\u\u\w\q\y\j\m\j\l\s\n\a\j\0\c\9\8\1\f\j\i\t\i\s\k\x\9\4\z\y\m\5\3\k\w\8\q\6\1\d\3\u\7\j\l\d\s\s\1\6\z\y\x\9\q\n\i\0\h\a\n\s\e\7\o\j\a\g\g\j\e\h\3\p\z\v\5\k\w\w\x\8\4\o\x\n\k\1\w\j\t\v\0\h\d\r\4\9\y\c\p\8\e\9\o\e\4\y\j\i\c\d\8\7\5\4\w\n\u\0\c\c\g\b\4\n\7\o\9\l\p\o\a\o\t\h\m\s\u\2\8\v\k\f\k\p\h\g\1\x\8\a\0\f\i\0\g\z\u\f\y\p\p\5\d\d\a\b\c\c\d\r\0\5\d\d\m\2\y\p\d\k\l\1\x\g\h\j\v\g\m\v\5\5\i\w\d\a\d\y\g\v\k\t\p\6\5\g\p\5\z\q\u\t\l\n\8\u\6\8\6\q\9\w\r\8\g\5\2\u\3\m\d\z\k\8\b\u\v\5\j\1\6\b\q\m\w\s\v\z\5\z\b\d\o\x\k\n\k\2\p\f\s\y\i\o\l\q\s\5\q\9\m\g\8\h\1\c\7\p\7\8\3\a\o\8\k\f\g\d\h\e\q\x\0\k\w\k\e\n\m\w\c\9\6\r\b\b\s\5\7\8\u\s\x\p\0\a\s\o\f\e\h\w\2\h\f\m\1\o\g\l\r\4\1 ]] 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.977 21:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.236 [2024-07-24 21:28:09.000956] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:24.236 [2024-07-24 21:28:09.001098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62617 ] 00:06:24.236 [2024-07-24 21:28:09.136850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.495 [2024-07-24 21:28:09.241873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.495 [2024-07-24 21:28:09.313786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.755  Copying: 512/512 [B] (average 500 kBps) 00:06:24.755 00:06:24.755 21:28:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xhxdc4aqif56zw42evlfv94hq9e3qm3ytu15z0xgeva0fqic8tn3la2w8yf29c1k2obitzizzqceir9y2dk8pltzabvlm55dblnav7o2b72da6ipwb6nf8yc6lyqz85xfj4r7xutyundbaoek4m2gz9wn1osa3iu4ht3pwttuhnkbsg4xb67nulpmreghzj4bucb49kn2wpx358bqoifqaeo4lu3kakg0iyisss2thlcaprds6jm2tk7xer2jsdkvfjbjwlqhrurxja3lj6hptu5w19c2cjxk7glcs0ds1ovnyi1vujhnma7lcieyqw621b1sg6b1hq2yqhf2z9umbdjidbdp4py5n578ba8vs8vwc839vcvomiiu7p67ogjy0qpeh3l3k319wply1dz2p4bjic0uc24j55gb6eysrm6d0jho7azr4fqcxi4l4433sohnv5w16giva10usidjp5z5oe744b0voh6zes7r39p55hwr4g8xvgvtnxhkmni == \x\h\x\d\c\4\a\q\i\f\5\6\z\w\4\2\e\v\l\f\v\9\4\h\q\9\e\3\q\m\3\y\t\u\1\5\z\0\x\g\e\v\a\0\f\q\i\c\8\t\n\3\l\a\2\w\8\y\f\2\9\c\1\k\2\o\b\i\t\z\i\z\z\q\c\e\i\r\9\y\2\d\k\8\p\l\t\z\a\b\v\l\m\5\5\d\b\l\n\a\v\7\o\2\b\7\2\d\a\6\i\p\w\b\6\n\f\8\y\c\6\l\y\q\z\8\5\x\f\j\4\r\7\x\u\t\y\u\n\d\b\a\o\e\k\4\m\2\g\z\9\w\n\1\o\s\a\3\i\u\4\h\t\3\p\w\t\t\u\h\n\k\b\s\g\4\x\b\6\7\n\u\l\p\m\r\e\g\h\z\j\4\b\u\c\b\4\9\k\n\2\w\p\x\3\5\8\b\q\o\i\f\q\a\e\o\4\l\u\3\k\a\k\g\0\i\y\i\s\s\s\2\t\h\l\c\a\p\r\d\s\6\j\m\2\t\k\7\x\e\r\2\j\s\d\k\v\f\j\b\j\w\l\q\h\r\u\r\x\j\a\3\l\j\6\h\p\t\u\5\w\1\9\c\2\c\j\x\k\7\g\l\c\s\0\d\s\1\o\v\n\y\i\1\v\u\j\h\n\m\a\7\l\c\i\e\y\q\w\6\2\1\b\1\s\g\6\b\1\h\q\2\y\q\h\f\2\z\9\u\m\b\d\j\i\d\b\d\p\4\p\y\5\n\5\7\8\b\a\8\v\s\8\v\w\c\8\3\9\v\c\v\o\m\i\i\u\7\p\6\7\o\g\j\y\0\q\p\e\h\3\l\3\k\3\1\9\w\p\l\y\1\d\z\2\p\4\b\j\i\c\0\u\c\2\4\j\5\5\g\b\6\e\y\s\r\m\6\d\0\j\h\o\7\a\z\r\4\f\q\c\x\i\4\l\4\4\3\3\s\o\h\n\v\5\w\1\6\g\i\v\a\1\0\u\s\i\d\j\p\5\z\5\o\e\7\4\4\b\0\v\o\h\6\z\e\s\7\r\3\9\p\5\5\h\w\r\4\g\8\x\v\g\v\t\n\x\h\k\m\n\i ]] 00:06:24.755 21:28:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.755 21:28:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:24.755 [2024-07-24 21:28:09.722981] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:24.755 [2024-07-24 21:28:09.723081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62619 ] 00:06:25.015 [2024-07-24 21:28:09.852247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.015 [2024-07-24 21:28:09.970206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.274 [2024-07-24 21:28:10.043449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.534  Copying: 512/512 [B] (average 500 kBps) 00:06:25.534 00:06:25.534 21:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xhxdc4aqif56zw42evlfv94hq9e3qm3ytu15z0xgeva0fqic8tn3la2w8yf29c1k2obitzizzqceir9y2dk8pltzabvlm55dblnav7o2b72da6ipwb6nf8yc6lyqz85xfj4r7xutyundbaoek4m2gz9wn1osa3iu4ht3pwttuhnkbsg4xb67nulpmreghzj4bucb49kn2wpx358bqoifqaeo4lu3kakg0iyisss2thlcaprds6jm2tk7xer2jsdkvfjbjwlqhrurxja3lj6hptu5w19c2cjxk7glcs0ds1ovnyi1vujhnma7lcieyqw621b1sg6b1hq2yqhf2z9umbdjidbdp4py5n578ba8vs8vwc839vcvomiiu7p67ogjy0qpeh3l3k319wply1dz2p4bjic0uc24j55gb6eysrm6d0jho7azr4fqcxi4l4433sohnv5w16giva10usidjp5z5oe744b0voh6zes7r39p55hwr4g8xvgvtnxhkmni == \x\h\x\d\c\4\a\q\i\f\5\6\z\w\4\2\e\v\l\f\v\9\4\h\q\9\e\3\q\m\3\y\t\u\1\5\z\0\x\g\e\v\a\0\f\q\i\c\8\t\n\3\l\a\2\w\8\y\f\2\9\c\1\k\2\o\b\i\t\z\i\z\z\q\c\e\i\r\9\y\2\d\k\8\p\l\t\z\a\b\v\l\m\5\5\d\b\l\n\a\v\7\o\2\b\7\2\d\a\6\i\p\w\b\6\n\f\8\y\c\6\l\y\q\z\8\5\x\f\j\4\r\7\x\u\t\y\u\n\d\b\a\o\e\k\4\m\2\g\z\9\w\n\1\o\s\a\3\i\u\4\h\t\3\p\w\t\t\u\h\n\k\b\s\g\4\x\b\6\7\n\u\l\p\m\r\e\g\h\z\j\4\b\u\c\b\4\9\k\n\2\w\p\x\3\5\8\b\q\o\i\f\q\a\e\o\4\l\u\3\k\a\k\g\0\i\y\i\s\s\s\2\t\h\l\c\a\p\r\d\s\6\j\m\2\t\k\7\x\e\r\2\j\s\d\k\v\f\j\b\j\w\l\q\h\r\u\r\x\j\a\3\l\j\6\h\p\t\u\5\w\1\9\c\2\c\j\x\k\7\g\l\c\s\0\d\s\1\o\v\n\y\i\1\v\u\j\h\n\m\a\7\l\c\i\e\y\q\w\6\2\1\b\1\s\g\6\b\1\h\q\2\y\q\h\f\2\z\9\u\m\b\d\j\i\d\b\d\p\4\p\y\5\n\5\7\8\b\a\8\v\s\8\v\w\c\8\3\9\v\c\v\o\m\i\i\u\7\p\6\7\o\g\j\y\0\q\p\e\h\3\l\3\k\3\1\9\w\p\l\y\1\d\z\2\p\4\b\j\i\c\0\u\c\2\4\j\5\5\g\b\6\e\y\s\r\m\6\d\0\j\h\o\7\a\z\r\4\f\q\c\x\i\4\l\4\4\3\3\s\o\h\n\v\5\w\1\6\g\i\v\a\1\0\u\s\i\d\j\p\5\z\5\o\e\7\4\4\b\0\v\o\h\6\z\e\s\7\r\3\9\p\5\5\h\w\r\4\g\8\x\v\g\v\t\n\x\h\k\m\n\i ]] 00:06:25.534 21:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.534 21:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:25.534 [2024-07-24 21:28:10.470084] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:25.534 [2024-07-24 21:28:10.470214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62632 ] 00:06:25.794 [2024-07-24 21:28:10.605383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.794 [2024-07-24 21:28:10.719452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.794 [2024-07-24 21:28:10.792069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.314  Copying: 512/512 [B] (average 500 kBps) 00:06:26.314 00:06:26.314 21:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xhxdc4aqif56zw42evlfv94hq9e3qm3ytu15z0xgeva0fqic8tn3la2w8yf29c1k2obitzizzqceir9y2dk8pltzabvlm55dblnav7o2b72da6ipwb6nf8yc6lyqz85xfj4r7xutyundbaoek4m2gz9wn1osa3iu4ht3pwttuhnkbsg4xb67nulpmreghzj4bucb49kn2wpx358bqoifqaeo4lu3kakg0iyisss2thlcaprds6jm2tk7xer2jsdkvfjbjwlqhrurxja3lj6hptu5w19c2cjxk7glcs0ds1ovnyi1vujhnma7lcieyqw621b1sg6b1hq2yqhf2z9umbdjidbdp4py5n578ba8vs8vwc839vcvomiiu7p67ogjy0qpeh3l3k319wply1dz2p4bjic0uc24j55gb6eysrm6d0jho7azr4fqcxi4l4433sohnv5w16giva10usidjp5z5oe744b0voh6zes7r39p55hwr4g8xvgvtnxhkmni == \x\h\x\d\c\4\a\q\i\f\5\6\z\w\4\2\e\v\l\f\v\9\4\h\q\9\e\3\q\m\3\y\t\u\1\5\z\0\x\g\e\v\a\0\f\q\i\c\8\t\n\3\l\a\2\w\8\y\f\2\9\c\1\k\2\o\b\i\t\z\i\z\z\q\c\e\i\r\9\y\2\d\k\8\p\l\t\z\a\b\v\l\m\5\5\d\b\l\n\a\v\7\o\2\b\7\2\d\a\6\i\p\w\b\6\n\f\8\y\c\6\l\y\q\z\8\5\x\f\j\4\r\7\x\u\t\y\u\n\d\b\a\o\e\k\4\m\2\g\z\9\w\n\1\o\s\a\3\i\u\4\h\t\3\p\w\t\t\u\h\n\k\b\s\g\4\x\b\6\7\n\u\l\p\m\r\e\g\h\z\j\4\b\u\c\b\4\9\k\n\2\w\p\x\3\5\8\b\q\o\i\f\q\a\e\o\4\l\u\3\k\a\k\g\0\i\y\i\s\s\s\2\t\h\l\c\a\p\r\d\s\6\j\m\2\t\k\7\x\e\r\2\j\s\d\k\v\f\j\b\j\w\l\q\h\r\u\r\x\j\a\3\l\j\6\h\p\t\u\5\w\1\9\c\2\c\j\x\k\7\g\l\c\s\0\d\s\1\o\v\n\y\i\1\v\u\j\h\n\m\a\7\l\c\i\e\y\q\w\6\2\1\b\1\s\g\6\b\1\h\q\2\y\q\h\f\2\z\9\u\m\b\d\j\i\d\b\d\p\4\p\y\5\n\5\7\8\b\a\8\v\s\8\v\w\c\8\3\9\v\c\v\o\m\i\i\u\7\p\6\7\o\g\j\y\0\q\p\e\h\3\l\3\k\3\1\9\w\p\l\y\1\d\z\2\p\4\b\j\i\c\0\u\c\2\4\j\5\5\g\b\6\e\y\s\r\m\6\d\0\j\h\o\7\a\z\r\4\f\q\c\x\i\4\l\4\4\3\3\s\o\h\n\v\5\w\1\6\g\i\v\a\1\0\u\s\i\d\j\p\5\z\5\o\e\7\4\4\b\0\v\o\h\6\z\e\s\7\r\3\9\p\5\5\h\w\r\4\g\8\x\v\g\v\t\n\x\h\k\m\n\i ]] 00:06:26.314 21:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.314 21:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:26.314 [2024-07-24 21:28:11.220217] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:26.314 [2024-07-24 21:28:11.220320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62645 ] 00:06:26.573 [2024-07-24 21:28:11.352933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.574 [2024-07-24 21:28:11.457134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.574 [2024-07-24 21:28:11.528221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.141  Copying: 512/512 [B] (average 250 kBps) 00:06:27.141 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xhxdc4aqif56zw42evlfv94hq9e3qm3ytu15z0xgeva0fqic8tn3la2w8yf29c1k2obitzizzqceir9y2dk8pltzabvlm55dblnav7o2b72da6ipwb6nf8yc6lyqz85xfj4r7xutyundbaoek4m2gz9wn1osa3iu4ht3pwttuhnkbsg4xb67nulpmreghzj4bucb49kn2wpx358bqoifqaeo4lu3kakg0iyisss2thlcaprds6jm2tk7xer2jsdkvfjbjwlqhrurxja3lj6hptu5w19c2cjxk7glcs0ds1ovnyi1vujhnma7lcieyqw621b1sg6b1hq2yqhf2z9umbdjidbdp4py5n578ba8vs8vwc839vcvomiiu7p67ogjy0qpeh3l3k319wply1dz2p4bjic0uc24j55gb6eysrm6d0jho7azr4fqcxi4l4433sohnv5w16giva10usidjp5z5oe744b0voh6zes7r39p55hwr4g8xvgvtnxhkmni == \x\h\x\d\c\4\a\q\i\f\5\6\z\w\4\2\e\v\l\f\v\9\4\h\q\9\e\3\q\m\3\y\t\u\1\5\z\0\x\g\e\v\a\0\f\q\i\c\8\t\n\3\l\a\2\w\8\y\f\2\9\c\1\k\2\o\b\i\t\z\i\z\z\q\c\e\i\r\9\y\2\d\k\8\p\l\t\z\a\b\v\l\m\5\5\d\b\l\n\a\v\7\o\2\b\7\2\d\a\6\i\p\w\b\6\n\f\8\y\c\6\l\y\q\z\8\5\x\f\j\4\r\7\x\u\t\y\u\n\d\b\a\o\e\k\4\m\2\g\z\9\w\n\1\o\s\a\3\i\u\4\h\t\3\p\w\t\t\u\h\n\k\b\s\g\4\x\b\6\7\n\u\l\p\m\r\e\g\h\z\j\4\b\u\c\b\4\9\k\n\2\w\p\x\3\5\8\b\q\o\i\f\q\a\e\o\4\l\u\3\k\a\k\g\0\i\y\i\s\s\s\2\t\h\l\c\a\p\r\d\s\6\j\m\2\t\k\7\x\e\r\2\j\s\d\k\v\f\j\b\j\w\l\q\h\r\u\r\x\j\a\3\l\j\6\h\p\t\u\5\w\1\9\c\2\c\j\x\k\7\g\l\c\s\0\d\s\1\o\v\n\y\i\1\v\u\j\h\n\m\a\7\l\c\i\e\y\q\w\6\2\1\b\1\s\g\6\b\1\h\q\2\y\q\h\f\2\z\9\u\m\b\d\j\i\d\b\d\p\4\p\y\5\n\5\7\8\b\a\8\v\s\8\v\w\c\8\3\9\v\c\v\o\m\i\i\u\7\p\6\7\o\g\j\y\0\q\p\e\h\3\l\3\k\3\1\9\w\p\l\y\1\d\z\2\p\4\b\j\i\c\0\u\c\2\4\j\5\5\g\b\6\e\y\s\r\m\6\d\0\j\h\o\7\a\z\r\4\f\q\c\x\i\4\l\4\4\3\3\s\o\h\n\v\5\w\1\6\g\i\v\a\1\0\u\s\i\d\j\p\5\z\5\o\e\7\4\4\b\0\v\o\h\6\z\e\s\7\r\3\9\p\5\5\h\w\r\4\g\8\x\v\g\v\t\n\x\h\k\m\n\i ]] 00:06:27.141 00:06:27.141 real 0m5.928s 00:06:27.141 user 0m3.505s 00:06:27.141 sys 0m1.442s 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.141 ************************************ 00:06:27.141 END TEST dd_flags_misc_forced_aio 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.141 ************************************ 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.141 00:06:27.141 real 0m26.233s 00:06:27.141 user 0m13.984s 00:06:27.141 sys 0m8.683s 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.141 21:28:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.141 
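The three spdk_dd runs above differ only in the --oflag value (nonblock, sync, dsync); the 512-byte payload, the --aio read side with --iflag=nonblock, and the magic-string verification stay the same. A minimal sketch of that sweep, assuming scratch paths under /tmp and using cmp(1) as a stand-in for the test's digest comparison:

#!/usr/bin/env bash
# Hedged sketch of the dd_flags_misc_forced_aio oflag sweep, not the test's exact code.
set -eu

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as seen in the log
src=/tmp/dd.dump0                                        # assumed scratch paths
dst=/tmp/dd.dump1

dd if=/dev/urandom of="$src" bs=512 count=1 status=none  # 512-byte payload, matching "512/512 [B]"

for oflag in nonblock sync dsync; do
    "$SPDK_DD" --aio --if="$src" --iflag=nonblock --of="$dst" --oflag="$oflag"
    cmp "$src" "$dst"    # the real test compares a captured magic string instead
done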
************************************ 00:06:27.141 END TEST spdk_dd_posix 00:06:27.141 ************************************ 00:06:27.141 21:28:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.141 21:28:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.141 21:28:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.141 21:28:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:27.141 ************************************ 00:06:27.141 START TEST spdk_dd_malloc 00:06:27.141 ************************************ 00:06:27.141 21:28:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.141 * Looking for test storage... 00:06:27.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.141 21:28:12 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.141 21:28:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.141 21:28:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.141 21:28:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:27.142 ************************************ 00:06:27.142 START TEST dd_malloc_copy 00:06:27.142 ************************************ 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:27.142 21:28:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.401 [2024-07-24 21:28:12.158595] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:27.401 [2024-07-24 21:28:12.158709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62719 ] 00:06:27.401 { 00:06:27.401 "subsystems": [ 00:06:27.401 { 00:06:27.401 "subsystem": "bdev", 00:06:27.401 "config": [ 00:06:27.401 { 00:06:27.401 "params": { 00:06:27.401 "block_size": 512, 00:06:27.401 "num_blocks": 1048576, 00:06:27.401 "name": "malloc0" 00:06:27.401 }, 00:06:27.401 "method": "bdev_malloc_create" 00:06:27.401 }, 00:06:27.401 { 00:06:27.401 "params": { 00:06:27.401 "block_size": 512, 00:06:27.401 "num_blocks": 1048576, 00:06:27.401 "name": "malloc1" 00:06:27.401 }, 00:06:27.401 "method": "bdev_malloc_create" 00:06:27.401 }, 00:06:27.401 { 00:06:27.401 "method": "bdev_wait_for_examine" 00:06:27.401 } 00:06:27.401 ] 00:06:27.401 } 00:06:27.401 ] 00:06:27.401 } 00:06:27.401 [2024-07-24 21:28:12.295478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.659 [2024-07-24 21:28:12.404923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.659 [2024-07-24 21:28:12.477113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.168  Copying: 216/512 [MB] (216 MBps) Copying: 449/512 [MB] (232 MBps) Copying: 512/512 [MB] (average 223 MBps) 00:06:31.168 00:06:31.168 21:28:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:31.168 21:28:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:31.168 21:28:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:31.168 21:28:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.168 [2024-07-24 21:28:16.101930] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
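The JSON block above is what gen_conf hands to spdk_dd on /dev/fd/62: two malloc bdevs of 1048576 blocks x 512 bytes each (512 MiB, matching the 512/512 [MB] copy) plus a bdev_wait_for_examine step. A hedged sketch of the same copy driven by hand, writing the config to a temporary file instead of a process-substitution fd:

#!/usr/bin/env bash
# Hedged sketch of dd_malloc_copy; the temp-file config stands in for gen_conf on /dev/fd/62.
set -eu

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)

cat > "$conf" <<'JSON'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "block_size": 512, "num_blocks": 1048576 } },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
JSON

"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json "$conf"   # forward pass, as above
"$SPDK_DD" --ib=malloc1 --ob=malloc0 --json "$conf"   # reverse pass, as in the run that follows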
00:06:31.168 [2024-07-24 21:28:16.102052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62762 ] 00:06:31.168 { 00:06:31.168 "subsystems": [ 00:06:31.168 { 00:06:31.168 "subsystem": "bdev", 00:06:31.168 "config": [ 00:06:31.168 { 00:06:31.168 "params": { 00:06:31.168 "block_size": 512, 00:06:31.168 "num_blocks": 1048576, 00:06:31.168 "name": "malloc0" 00:06:31.168 }, 00:06:31.168 "method": "bdev_malloc_create" 00:06:31.168 }, 00:06:31.168 { 00:06:31.168 "params": { 00:06:31.168 "block_size": 512, 00:06:31.168 "num_blocks": 1048576, 00:06:31.168 "name": "malloc1" 00:06:31.168 }, 00:06:31.168 "method": "bdev_malloc_create" 00:06:31.168 }, 00:06:31.168 { 00:06:31.168 "method": "bdev_wait_for_examine" 00:06:31.168 } 00:06:31.168 ] 00:06:31.168 } 00:06:31.168 ] 00:06:31.168 } 00:06:31.429 [2024-07-24 21:28:16.242661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.429 [2024-07-24 21:28:16.358805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.688 [2024-07-24 21:28:16.436357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.142  Copying: 214/512 [MB] (214 MBps) Copying: 427/512 [MB] (213 MBps) Copying: 512/512 [MB] (average 213 MBps) 00:06:35.142 00:06:35.142 00:06:35.142 real 0m8.019s 00:06:35.142 user 0m6.732s 00:06:35.142 sys 0m1.128s 00:06:35.142 21:28:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.142 ************************************ 00:06:35.142 END TEST dd_malloc_copy 00:06:35.142 ************************************ 00:06:35.142 21:28:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:35.402 00:06:35.402 real 0m8.163s 00:06:35.402 user 0m6.788s 00:06:35.402 sys 0m1.215s 00:06:35.402 21:28:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.402 21:28:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:35.402 ************************************ 00:06:35.402 END TEST spdk_dd_malloc 00:06:35.402 ************************************ 00:06:35.402 21:28:20 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:35.402 21:28:20 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:35.402 21:28:20 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.402 21:28:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:35.402 ************************************ 00:06:35.402 START TEST spdk_dd_bdev_to_bdev 00:06:35.402 ************************************ 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:35.402 * Looking for test storage... 
00:06:35.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:35.402 
21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:35.402 ************************************ 00:06:35.402 START TEST dd_inflate_file 00:06:35.402 ************************************ 00:06:35.402 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:35.402 [2024-07-24 21:28:20.361083] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:35.402 [2024-07-24 21:28:20.361179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:06:35.662 [2024-07-24 21:28:20.498445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.662 [2024-07-24 21:28:20.597815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.921 [2024-07-24 21:28:20.667394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.180  Copying: 64/64 [MB] (average 1560 MBps) 00:06:36.180 00:06:36.180 00:06:36.180 real 0m0.687s 00:06:36.180 user 0m0.403s 00:06:36.180 sys 0m0.355s 00:06:36.180 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.180 ************************************ 00:06:36.180 END TEST dd_inflate_file 00:06:36.180 ************************************ 00:06:36.180 21:28:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:36.180 ************************************ 00:06:36.180 START TEST dd_copy_to_out_bdev 00:06:36.180 ************************************ 00:06:36.180 21:28:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:36.180 [2024-07-24 21:28:21.100115] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
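The test_file0_size=67108891 reported by wc -c above is consistent with how the file was built: the 26-character line 'This Is Our Magic, find it' plus its newline (27 bytes), followed by dd_inflate_file appending 64 blocks of 1048576 zero bytes. A quick arithmetic check:

# 27-byte magic line + 64 x 1 MiB appended with --oflag=append --bs=1048576 --count=64
printf '%s\n' 'This Is Our Magic, find it' | wc -c   # 27
echo $(( 27 + 64 * 1048576 ))                        # 67108891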
00:06:36.180 [2024-07-24 21:28:21.100203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:06:36.180 { 00:06:36.180 "subsystems": [ 00:06:36.180 { 00:06:36.180 "subsystem": "bdev", 00:06:36.180 "config": [ 00:06:36.180 { 00:06:36.180 "params": { 00:06:36.180 "trtype": "pcie", 00:06:36.180 "traddr": "0000:00:10.0", 00:06:36.180 "name": "Nvme0" 00:06:36.180 }, 00:06:36.180 "method": "bdev_nvme_attach_controller" 00:06:36.180 }, 00:06:36.180 { 00:06:36.180 "params": { 00:06:36.180 "trtype": "pcie", 00:06:36.180 "traddr": "0000:00:11.0", 00:06:36.180 "name": "Nvme1" 00:06:36.180 }, 00:06:36.180 "method": "bdev_nvme_attach_controller" 00:06:36.180 }, 00:06:36.180 { 00:06:36.180 "method": "bdev_wait_for_examine" 00:06:36.180 } 00:06:36.180 ] 00:06:36.180 } 00:06:36.180 ] 00:06:36.180 } 00:06:36.438 [2024-07-24 21:28:21.230479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.438 [2024-07-24 21:28:21.342279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.438 [2024-07-24 21:28:21.418188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.332  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 51 MBps) 00:06:38.332 00:06:38.332 00:06:38.332 real 0m2.159s 00:06:38.332 user 0m1.874s 00:06:38.332 sys 0m1.699s 00:06:38.332 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.332 ************************************ 00:06:38.332 END TEST dd_copy_to_out_bdev 00:06:38.333 ************************************ 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.333 ************************************ 00:06:38.333 START TEST dd_offset_magic 00:06:38.333 ************************************ 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:38.333 21:28:23 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:38.333 21:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:38.333 [2024-07-24 21:28:23.324878] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:38.333 [2024-07-24 21:28:23.324994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62961 ] 00:06:38.593 { 00:06:38.593 "subsystems": [ 00:06:38.593 { 00:06:38.593 "subsystem": "bdev", 00:06:38.593 "config": [ 00:06:38.593 { 00:06:38.593 "params": { 00:06:38.593 "trtype": "pcie", 00:06:38.593 "traddr": "0000:00:10.0", 00:06:38.593 "name": "Nvme0" 00:06:38.593 }, 00:06:38.593 "method": "bdev_nvme_attach_controller" 00:06:38.593 }, 00:06:38.593 { 00:06:38.593 "params": { 00:06:38.593 "trtype": "pcie", 00:06:38.593 "traddr": "0000:00:11.0", 00:06:38.593 "name": "Nvme1" 00:06:38.593 }, 00:06:38.593 "method": "bdev_nvme_attach_controller" 00:06:38.593 }, 00:06:38.593 { 00:06:38.593 "method": "bdev_wait_for_examine" 00:06:38.593 } 00:06:38.593 ] 00:06:38.593 } 00:06:38.593 ] 00:06:38.593 } 00:06:38.593 [2024-07-24 21:28:23.455776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.593 [2024-07-24 21:28:23.563123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.853 [2024-07-24 21:28:23.638611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.367  Copying: 65/65 [MB] (average 855 MBps) 00:06:39.367 00:06:39.367 21:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:39.367 21:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:39.367 21:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:39.367 21:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:39.367 [2024-07-24 21:28:24.271189] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:39.367 [2024-07-24 21:28:24.271300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62976 ] 00:06:39.367 { 00:06:39.368 "subsystems": [ 00:06:39.368 { 00:06:39.368 "subsystem": "bdev", 00:06:39.368 "config": [ 00:06:39.368 { 00:06:39.368 "params": { 00:06:39.368 "trtype": "pcie", 00:06:39.368 "traddr": "0000:00:10.0", 00:06:39.368 "name": "Nvme0" 00:06:39.368 }, 00:06:39.368 "method": "bdev_nvme_attach_controller" 00:06:39.368 }, 00:06:39.368 { 00:06:39.368 "params": { 00:06:39.368 "trtype": "pcie", 00:06:39.368 "traddr": "0000:00:11.0", 00:06:39.368 "name": "Nvme1" 00:06:39.368 }, 00:06:39.368 "method": "bdev_nvme_attach_controller" 00:06:39.368 }, 00:06:39.368 { 00:06:39.368 "method": "bdev_wait_for_examine" 00:06:39.368 } 00:06:39.368 ] 00:06:39.368 } 00:06:39.368 ] 00:06:39.368 } 00:06:39.625 [2024-07-24 21:28:24.410274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.625 [2024-07-24 21:28:24.504937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.625 [2024-07-24 21:28:24.575489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.141  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:40.141 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:40.141 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:40.141 [2024-07-24 21:28:25.086651] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
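Each dd_offset_magic pass above writes 65 blocks of 1 MiB from Nvme0n1 into Nvme1n1 at the block offset given by --seek, then reads one block back from the same offset with --skip and checks that it still begins with the 26-byte magic line. A hedged sketch of a single pass (the log runs it for offsets 16 and 64); the nvme.json file name and the redirect on the read are assumptions the xtrace does not show:

# Hedged sketch of one dd_offset_magic pass, not the test's exact code.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
offset=16

"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json nvme.json
"$SPDK_DD" --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json nvme.json

read -rn26 magic_check < dd.dump1                      # first 26 bytes of the read-back block
[[ $magic_check == 'This Is Our Magic, find it' ]] && echo "offset $offset verified"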
00:06:40.141 [2024-07-24 21:28:25.086743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62998 ] 00:06:40.141 { 00:06:40.141 "subsystems": [ 00:06:40.141 { 00:06:40.141 "subsystem": "bdev", 00:06:40.141 "config": [ 00:06:40.141 { 00:06:40.141 "params": { 00:06:40.141 "trtype": "pcie", 00:06:40.141 "traddr": "0000:00:10.0", 00:06:40.141 "name": "Nvme0" 00:06:40.141 }, 00:06:40.141 "method": "bdev_nvme_attach_controller" 00:06:40.141 }, 00:06:40.141 { 00:06:40.141 "params": { 00:06:40.141 "trtype": "pcie", 00:06:40.141 "traddr": "0000:00:11.0", 00:06:40.141 "name": "Nvme1" 00:06:40.141 }, 00:06:40.141 "method": "bdev_nvme_attach_controller" 00:06:40.141 }, 00:06:40.141 { 00:06:40.141 "method": "bdev_wait_for_examine" 00:06:40.141 } 00:06:40.141 ] 00:06:40.141 } 00:06:40.141 ] 00:06:40.141 } 00:06:40.399 [2024-07-24 21:28:25.225925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.399 [2024-07-24 21:28:25.329680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.399 [2024-07-24 21:28:25.398465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.222  Copying: 65/65 [MB] (average 866 MBps) 00:06:41.222 00:06:41.222 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:41.222 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:41.222 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:41.222 21:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.222 [2024-07-24 21:28:26.038881] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:41.222 [2024-07-24 21:28:26.040286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63018 ] 00:06:41.222 { 00:06:41.222 "subsystems": [ 00:06:41.222 { 00:06:41.222 "subsystem": "bdev", 00:06:41.222 "config": [ 00:06:41.222 { 00:06:41.222 "params": { 00:06:41.222 "trtype": "pcie", 00:06:41.222 "traddr": "0000:00:10.0", 00:06:41.222 "name": "Nvme0" 00:06:41.222 }, 00:06:41.222 "method": "bdev_nvme_attach_controller" 00:06:41.222 }, 00:06:41.222 { 00:06:41.222 "params": { 00:06:41.223 "trtype": "pcie", 00:06:41.223 "traddr": "0000:00:11.0", 00:06:41.223 "name": "Nvme1" 00:06:41.223 }, 00:06:41.223 "method": "bdev_nvme_attach_controller" 00:06:41.223 }, 00:06:41.223 { 00:06:41.223 "method": "bdev_wait_for_examine" 00:06:41.223 } 00:06:41.223 ] 00:06:41.223 } 00:06:41.223 ] 00:06:41.223 } 00:06:41.223 [2024-07-24 21:28:26.182273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.480 [2024-07-24 21:28:26.282127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.480 [2024-07-24 21:28:26.356502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.995  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:41.995 00:06:41.995 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:41.996 00:06:41.996 real 0m3.538s 00:06:41.996 user 0m2.549s 00:06:41.996 sys 0m1.134s 00:06:41.996 ************************************ 00:06:41.996 END TEST dd_offset_magic 00:06:41.996 ************************************ 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:41.996 21:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:41.996 { 00:06:41.996 "subsystems": [ 00:06:41.996 { 00:06:41.996 "subsystem": "bdev", 00:06:41.996 "config": [ 00:06:41.996 { 00:06:41.996 "params": { 00:06:41.996 "trtype": "pcie", 00:06:41.996 "traddr": "0000:00:10.0", 00:06:41.996 
"name": "Nvme0" 00:06:41.996 }, 00:06:41.996 "method": "bdev_nvme_attach_controller" 00:06:41.996 }, 00:06:41.996 { 00:06:41.996 "params": { 00:06:41.996 "trtype": "pcie", 00:06:41.996 "traddr": "0000:00:11.0", 00:06:41.996 "name": "Nvme1" 00:06:41.996 }, 00:06:41.996 "method": "bdev_nvme_attach_controller" 00:06:41.996 }, 00:06:41.996 { 00:06:41.996 "method": "bdev_wait_for_examine" 00:06:41.996 } 00:06:41.996 ] 00:06:41.996 } 00:06:41.996 ] 00:06:41.996 } 00:06:41.996 [2024-07-24 21:28:26.926126] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:41.996 [2024-07-24 21:28:26.926442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63050 ] 00:06:42.254 [2024-07-24 21:28:27.065503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.254 [2024-07-24 21:28:27.162140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.254 [2024-07-24 21:28:27.232816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.769  Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:42.769 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:42.769 21:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:43.027 [2024-07-24 21:28:27.791789] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:43.027 [2024-07-24 21:28:27.791921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63071 ] 00:06:43.027 { 00:06:43.027 "subsystems": [ 00:06:43.027 { 00:06:43.027 "subsystem": "bdev", 00:06:43.027 "config": [ 00:06:43.027 { 00:06:43.027 "params": { 00:06:43.027 "trtype": "pcie", 00:06:43.027 "traddr": "0000:00:10.0", 00:06:43.027 "name": "Nvme0" 00:06:43.027 }, 00:06:43.027 "method": "bdev_nvme_attach_controller" 00:06:43.027 }, 00:06:43.027 { 00:06:43.027 "params": { 00:06:43.027 "trtype": "pcie", 00:06:43.027 "traddr": "0000:00:11.0", 00:06:43.027 "name": "Nvme1" 00:06:43.027 }, 00:06:43.027 "method": "bdev_nvme_attach_controller" 00:06:43.027 }, 00:06:43.027 { 00:06:43.027 "method": "bdev_wait_for_examine" 00:06:43.027 } 00:06:43.027 ] 00:06:43.027 } 00:06:43.027 ] 00:06:43.027 } 00:06:43.027 [2024-07-24 21:28:27.930095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.286 [2024-07-24 21:28:28.036034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.286 [2024-07-24 21:28:28.107227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.802  Copying: 5120/5120 [kB] (average 625 MBps) 00:06:43.802 00:06:43.802 21:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:43.802 00:06:43.802 real 0m8.396s 00:06:43.802 user 0m6.137s 00:06:43.802 sys 0m4.038s 00:06:43.802 21:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.802 ************************************ 00:06:43.802 END TEST spdk_dd_bdev_to_bdev 00:06:43.802 ************************************ 00:06:43.802 21:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:43.802 21:28:28 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:43.802 21:28:28 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:43.802 21:28:28 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.802 21:28:28 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.802 21:28:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.802 ************************************ 00:06:43.802 START TEST spdk_dd_uring 00:06:43.802 ************************************ 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:43.802 * Looking for test storage... 
00:06:43.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:43.802 ************************************ 00:06:43.802 START TEST dd_uring_copy 00:06:43.802 ************************************ 00:06:43.802 
21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:43.802 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:43.803 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.061 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=3w8430zoue1ow5sk4edneic5fjhfkse7hxwh1trul17pij0drm8vczx3n7lcoacevovnf66yoawjzjmzqkkiqmijrjbstun4jm2uvsizt10j4s2x2i1ldsrlk0u22ecxyqfs615mqxm09l9e8xrrlxd3ftcw6sgsgklggs5odi7l1n0cx2tvbxyhd3rgm33ipw4zdudree63kpprj21mpqnm56no54gr7ai7xscxr5djrkjehdwg52lv93qbcq5iaonitee157qyi005tx69y9otmu7vztmgflh59zirhgrenuhb1t0muzg62a3wcki4iu3eexel191oi3aktdrtw2j4qmjw6j2s12856t69devn4jfsxas9x8kgsc1hhrsebdqrcdis30n91laz8025m4fd754czysk5i2u6nmczdwix6ythw7ddl043me32kon8ufk0caxfczhnfj9y7jvjp3m1jqrtl19imzd412o5s3e63iu9ouqk7x8g5zadw82mkhtqz0b3urbkne5o43ssz9fp21q8tfckuy25hgzb11dh9r51rh0muppafxge7awesf26mdl95f9mlzysokdv0wg8kffioihw2kfaltfjpbtbxwc70ltma3zuumj2n286sdcauqbjtt40swp2h2z6dxifrjcd5ws0brcog395ifi7bqz10f8s9vlsdbk0b35p5h3pa1xagpif3vk0ap9kv1r6oyxql5cqpu73fqqehuonni7a671lr8918zm5djqgfy2a9jyxv0w67ac45k2c11o08hh5p5f2mhxij8pl1wvps9q33qbcw79wsqex4y0aonzy31r47q0ixb2v8l7ik56c8pdwkjcltpfolbmgjdrjo2scfdcamjc87l377o18zlm2z8by28lrj5dzxlyauvuk36iw1rs9mtbhuqq1kechaojpynh02ehss30k9ncmj0jgb6qx3yatzd06s2pg61ob980hn8dhjfdtwe4nr6ptk64k68avnbdgkwmqbxo 00:06:44.061 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 3w8430zoue1ow5sk4edneic5fjhfkse7hxwh1trul17pij0drm8vczx3n7lcoacevovnf66yoawjzjmzqkkiqmijrjbstun4jm2uvsizt10j4s2x2i1ldsrlk0u22ecxyqfs615mqxm09l9e8xrrlxd3ftcw6sgsgklggs5odi7l1n0cx2tvbxyhd3rgm33ipw4zdudree63kpprj21mpqnm56no54gr7ai7xscxr5djrkjehdwg52lv93qbcq5iaonitee157qyi005tx69y9otmu7vztmgflh59zirhgrenuhb1t0muzg62a3wcki4iu3eexel191oi3aktdrtw2j4qmjw6j2s12856t69devn4jfsxas9x8kgsc1hhrsebdqrcdis30n91laz8025m4fd754czysk5i2u6nmczdwix6ythw7ddl043me32kon8ufk0caxfczhnfj9y7jvjp3m1jqrtl19imzd412o5s3e63iu9ouqk7x8g5zadw82mkhtqz0b3urbkne5o43ssz9fp21q8tfckuy25hgzb11dh9r51rh0muppafxge7awesf26mdl95f9mlzysokdv0wg8kffioihw2kfaltfjpbtbxwc70ltma3zuumj2n286sdcauqbjtt40swp2h2z6dxifrjcd5ws0brcog395ifi7bqz10f8s9vlsdbk0b35p5h3pa1xagpif3vk0ap9kv1r6oyxql5cqpu73fqqehuonni7a671lr8918zm5djqgfy2a9jyxv0w67ac45k2c11o08hh5p5f2mhxij8pl1wvps9q33qbcw79wsqex4y0aonzy31r47q0ixb2v8l7ik56c8pdwkjcltpfolbmgjdrjo2scfdcamjc87l377o18zlm2z8by28lrj5dzxlyauvuk36iw1rs9mtbhuqq1kechaojpynh02ehss30k9ncmj0jgb6qx3yatzd06s2pg61ob980hn8dhjfdtwe4nr6ptk64k68avnbdgkwmqbxo 00:06:44.061 21:28:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:44.061 [2024-07-24 21:28:28.862721] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:44.061 [2024-07-24 21:28:28.863097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63141 ] 00:06:44.061 [2024-07-24 21:28:28.997555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.319 [2024-07-24 21:28:29.102255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.319 [2024-07-24 21:28:29.173094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.451  Copying: 511/511 [MB] (average 1410 MBps) 00:06:45.451 00:06:45.451 21:28:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:45.451 21:28:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:45.451 21:28:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:45.451 21:28:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.451 [2024-07-24 21:28:30.397132] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:45.451 [2024-07-24 21:28:30.397239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63162 ] 00:06:45.451 { 00:06:45.451 "subsystems": [ 00:06:45.451 { 00:06:45.451 "subsystem": "bdev", 00:06:45.451 "config": [ 00:06:45.451 { 00:06:45.451 "params": { 00:06:45.451 "block_size": 512, 00:06:45.451 "num_blocks": 1048576, 00:06:45.451 "name": "malloc0" 00:06:45.451 }, 00:06:45.451 "method": "bdev_malloc_create" 00:06:45.451 }, 00:06:45.451 { 00:06:45.451 "params": { 00:06:45.451 "filename": "/dev/zram1", 00:06:45.451 "name": "uring0" 00:06:45.451 }, 00:06:45.451 "method": "bdev_uring_create" 00:06:45.451 }, 00:06:45.451 { 00:06:45.451 "method": "bdev_wait_for_examine" 00:06:45.451 } 00:06:45.451 ] 00:06:45.451 } 00:06:45.451 ] 00:06:45.451 } 00:06:45.709 [2024-07-24 21:28:30.533122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.709 [2024-07-24 21:28:30.624587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.709 [2024-07-24 21:28:30.697270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.852  Copying: 237/512 [MB] (237 MBps) Copying: 457/512 [MB] (220 MBps) Copying: 512/512 [MB] (average 229 MBps) 00:06:48.852 00:06:48.852 21:28:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:48.852 21:28:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:48.852 21:28:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.852 21:28:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.852 [2024-07-24 21:28:33.827509] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
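The uring path above provisions a zram device through /sys/class/zram-control (hot_add returned id 1), sizes it to 512M, and builds a 512 MiB image: a 1024-character magic line plus its newline (1025 bytes) followed by 536869887 appended zero bytes, exactly 536870912 bytes in total. The image is pushed into the uring0 bdev backed by /dev/zram1 and read back for the 1024-byte verify that follows. A hedged sketch; the disksize write, the uring.json file and the magic generator are assumptions standing in for details the xtrace does not show:

# Hedged sketch of the dd_uring_copy round trip, not the test's exact code.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

id=$(cat /sys/class/zram-control/hot_add)   # allocate a zram device; the log got id 1
echo 512M > /sys/block/zram"$id"/disksize   # assumed target of the 'echo 512M' seen above

magic=$(printf 'x%.0s' {1..1024})           # deterministic 1024-char stand-in for gen_bytes 1024
echo "$magic" > magic.dump0                 # 1025 bytes including the newline
"$SPDK_DD" --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1   # pad to 512 MiB

"$SPDK_DD" --if=magic.dump0 --ob=uring0 --json uring.json   # uring.json: malloc0 + uring0 on /dev/zram1
"$SPDK_DD" --ib=uring0 --of=magic.dump1 --json uring.json

read -rn1024 verify_magic < magic.dump1     # leading 1024 bytes must still be the magic
[[ $verify_magic == "$magic" ]] && echo "uring copy verified"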
00:06:48.852 [2024-07-24 21:28:33.827899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63212 ] 00:06:48.852 { 00:06:48.852 "subsystems": [ 00:06:48.852 { 00:06:48.852 "subsystem": "bdev", 00:06:48.852 "config": [ 00:06:48.852 { 00:06:48.852 "params": { 00:06:48.852 "block_size": 512, 00:06:48.852 "num_blocks": 1048576, 00:06:48.852 "name": "malloc0" 00:06:48.852 }, 00:06:48.852 "method": "bdev_malloc_create" 00:06:48.852 }, 00:06:48.852 { 00:06:48.852 "params": { 00:06:48.852 "filename": "/dev/zram1", 00:06:48.852 "name": "uring0" 00:06:48.852 }, 00:06:48.853 "method": "bdev_uring_create" 00:06:48.853 }, 00:06:48.853 { 00:06:48.853 "method": "bdev_wait_for_examine" 00:06:48.853 } 00:06:48.853 ] 00:06:48.853 } 00:06:48.853 ] 00:06:48.853 } 00:06:49.111 [2024-07-24 21:28:33.965674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.111 [2024-07-24 21:28:34.107811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.369 [2024-07-24 21:28:34.183176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.180  Copying: 165/512 [MB] (165 MBps) Copying: 328/512 [MB] (162 MBps) Copying: 491/512 [MB] (163 MBps) Copying: 512/512 [MB] (average 164 MBps) 00:06:53.180 00:06:53.180 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:53.180 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 3w8430zoue1ow5sk4edneic5fjhfkse7hxwh1trul17pij0drm8vczx3n7lcoacevovnf66yoawjzjmzqkkiqmijrjbstun4jm2uvsizt10j4s2x2i1ldsrlk0u22ecxyqfs615mqxm09l9e8xrrlxd3ftcw6sgsgklggs5odi7l1n0cx2tvbxyhd3rgm33ipw4zdudree63kpprj21mpqnm56no54gr7ai7xscxr5djrkjehdwg52lv93qbcq5iaonitee157qyi005tx69y9otmu7vztmgflh59zirhgrenuhb1t0muzg62a3wcki4iu3eexel191oi3aktdrtw2j4qmjw6j2s12856t69devn4jfsxas9x8kgsc1hhrsebdqrcdis30n91laz8025m4fd754czysk5i2u6nmczdwix6ythw7ddl043me32kon8ufk0caxfczhnfj9y7jvjp3m1jqrtl19imzd412o5s3e63iu9ouqk7x8g5zadw82mkhtqz0b3urbkne5o43ssz9fp21q8tfckuy25hgzb11dh9r51rh0muppafxge7awesf26mdl95f9mlzysokdv0wg8kffioihw2kfaltfjpbtbxwc70ltma3zuumj2n286sdcauqbjtt40swp2h2z6dxifrjcd5ws0brcog395ifi7bqz10f8s9vlsdbk0b35p5h3pa1xagpif3vk0ap9kv1r6oyxql5cqpu73fqqehuonni7a671lr8918zm5djqgfy2a9jyxv0w67ac45k2c11o08hh5p5f2mhxij8pl1wvps9q33qbcw79wsqex4y0aonzy31r47q0ixb2v8l7ik56c8pdwkjcltpfolbmgjdrjo2scfdcamjc87l377o18zlm2z8by28lrj5dzxlyauvuk36iw1rs9mtbhuqq1kechaojpynh02ehss30k9ncmj0jgb6qx3yatzd06s2pg61ob980hn8dhjfdtwe4nr6ptk64k68avnbdgkwmqbxo == 
\3\w\8\4\3\0\z\o\u\e\1\o\w\5\s\k\4\e\d\n\e\i\c\5\f\j\h\f\k\s\e\7\h\x\w\h\1\t\r\u\l\1\7\p\i\j\0\d\r\m\8\v\c\z\x\3\n\7\l\c\o\a\c\e\v\o\v\n\f\6\6\y\o\a\w\j\z\j\m\z\q\k\k\i\q\m\i\j\r\j\b\s\t\u\n\4\j\m\2\u\v\s\i\z\t\1\0\j\4\s\2\x\2\i\1\l\d\s\r\l\k\0\u\2\2\e\c\x\y\q\f\s\6\1\5\m\q\x\m\0\9\l\9\e\8\x\r\r\l\x\d\3\f\t\c\w\6\s\g\s\g\k\l\g\g\s\5\o\d\i\7\l\1\n\0\c\x\2\t\v\b\x\y\h\d\3\r\g\m\3\3\i\p\w\4\z\d\u\d\r\e\e\6\3\k\p\p\r\j\2\1\m\p\q\n\m\5\6\n\o\5\4\g\r\7\a\i\7\x\s\c\x\r\5\d\j\r\k\j\e\h\d\w\g\5\2\l\v\9\3\q\b\c\q\5\i\a\o\n\i\t\e\e\1\5\7\q\y\i\0\0\5\t\x\6\9\y\9\o\t\m\u\7\v\z\t\m\g\f\l\h\5\9\z\i\r\h\g\r\e\n\u\h\b\1\t\0\m\u\z\g\6\2\a\3\w\c\k\i\4\i\u\3\e\e\x\e\l\1\9\1\o\i\3\a\k\t\d\r\t\w\2\j\4\q\m\j\w\6\j\2\s\1\2\8\5\6\t\6\9\d\e\v\n\4\j\f\s\x\a\s\9\x\8\k\g\s\c\1\h\h\r\s\e\b\d\q\r\c\d\i\s\3\0\n\9\1\l\a\z\8\0\2\5\m\4\f\d\7\5\4\c\z\y\s\k\5\i\2\u\6\n\m\c\z\d\w\i\x\6\y\t\h\w\7\d\d\l\0\4\3\m\e\3\2\k\o\n\8\u\f\k\0\c\a\x\f\c\z\h\n\f\j\9\y\7\j\v\j\p\3\m\1\j\q\r\t\l\1\9\i\m\z\d\4\1\2\o\5\s\3\e\6\3\i\u\9\o\u\q\k\7\x\8\g\5\z\a\d\w\8\2\m\k\h\t\q\z\0\b\3\u\r\b\k\n\e\5\o\4\3\s\s\z\9\f\p\2\1\q\8\t\f\c\k\u\y\2\5\h\g\z\b\1\1\d\h\9\r\5\1\r\h\0\m\u\p\p\a\f\x\g\e\7\a\w\e\s\f\2\6\m\d\l\9\5\f\9\m\l\z\y\s\o\k\d\v\0\w\g\8\k\f\f\i\o\i\h\w\2\k\f\a\l\t\f\j\p\b\t\b\x\w\c\7\0\l\t\m\a\3\z\u\u\m\j\2\n\2\8\6\s\d\c\a\u\q\b\j\t\t\4\0\s\w\p\2\h\2\z\6\d\x\i\f\r\j\c\d\5\w\s\0\b\r\c\o\g\3\9\5\i\f\i\7\b\q\z\1\0\f\8\s\9\v\l\s\d\b\k\0\b\3\5\p\5\h\3\p\a\1\x\a\g\p\i\f\3\v\k\0\a\p\9\k\v\1\r\6\o\y\x\q\l\5\c\q\p\u\7\3\f\q\q\e\h\u\o\n\n\i\7\a\6\7\1\l\r\8\9\1\8\z\m\5\d\j\q\g\f\y\2\a\9\j\y\x\v\0\w\6\7\a\c\4\5\k\2\c\1\1\o\0\8\h\h\5\p\5\f\2\m\h\x\i\j\8\p\l\1\w\v\p\s\9\q\3\3\q\b\c\w\7\9\w\s\q\e\x\4\y\0\a\o\n\z\y\3\1\r\4\7\q\0\i\x\b\2\v\8\l\7\i\k\5\6\c\8\p\d\w\k\j\c\l\t\p\f\o\l\b\m\g\j\d\r\j\o\2\s\c\f\d\c\a\m\j\c\8\7\l\3\7\7\o\1\8\z\l\m\2\z\8\b\y\2\8\l\r\j\5\d\z\x\l\y\a\u\v\u\k\3\6\i\w\1\r\s\9\m\t\b\h\u\q\q\1\k\e\c\h\a\o\j\p\y\n\h\0\2\e\h\s\s\3\0\k\9\n\c\m\j\0\j\g\b\6\q\x\3\y\a\t\z\d\0\6\s\2\p\g\6\1\o\b\9\8\0\h\n\8\d\h\j\f\d\t\w\e\4\n\r\6\p\t\k\6\4\k\6\8\a\v\n\b\d\g\k\w\m\q\b\x\o ]] 00:06:53.180 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:53.180 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 3w8430zoue1ow5sk4edneic5fjhfkse7hxwh1trul17pij0drm8vczx3n7lcoacevovnf66yoawjzjmzqkkiqmijrjbstun4jm2uvsizt10j4s2x2i1ldsrlk0u22ecxyqfs615mqxm09l9e8xrrlxd3ftcw6sgsgklggs5odi7l1n0cx2tvbxyhd3rgm33ipw4zdudree63kpprj21mpqnm56no54gr7ai7xscxr5djrkjehdwg52lv93qbcq5iaonitee157qyi005tx69y9otmu7vztmgflh59zirhgrenuhb1t0muzg62a3wcki4iu3eexel191oi3aktdrtw2j4qmjw6j2s12856t69devn4jfsxas9x8kgsc1hhrsebdqrcdis30n91laz8025m4fd754czysk5i2u6nmczdwix6ythw7ddl043me32kon8ufk0caxfczhnfj9y7jvjp3m1jqrtl19imzd412o5s3e63iu9ouqk7x8g5zadw82mkhtqz0b3urbkne5o43ssz9fp21q8tfckuy25hgzb11dh9r51rh0muppafxge7awesf26mdl95f9mlzysokdv0wg8kffioihw2kfaltfjpbtbxwc70ltma3zuumj2n286sdcauqbjtt40swp2h2z6dxifrjcd5ws0brcog395ifi7bqz10f8s9vlsdbk0b35p5h3pa1xagpif3vk0ap9kv1r6oyxql5cqpu73fqqehuonni7a671lr8918zm5djqgfy2a9jyxv0w67ac45k2c11o08hh5p5f2mhxij8pl1wvps9q33qbcw79wsqex4y0aonzy31r47q0ixb2v8l7ik56c8pdwkjcltpfolbmgjdrjo2scfdcamjc87l377o18zlm2z8by28lrj5dzxlyauvuk36iw1rs9mtbhuqq1kechaojpynh02ehss30k9ncmj0jgb6qx3yatzd06s2pg61ob980hn8dhjfdtwe4nr6ptk64k68avnbdgkwmqbxo == 
\3\w\8\4\3\0\z\o\u\e\1\o\w\5\s\k\4\e\d\n\e\i\c\5\f\j\h\f\k\s\e\7\h\x\w\h\1\t\r\u\l\1\7\p\i\j\0\d\r\m\8\v\c\z\x\3\n\7\l\c\o\a\c\e\v\o\v\n\f\6\6\y\o\a\w\j\z\j\m\z\q\k\k\i\q\m\i\j\r\j\b\s\t\u\n\4\j\m\2\u\v\s\i\z\t\1\0\j\4\s\2\x\2\i\1\l\d\s\r\l\k\0\u\2\2\e\c\x\y\q\f\s\6\1\5\m\q\x\m\0\9\l\9\e\8\x\r\r\l\x\d\3\f\t\c\w\6\s\g\s\g\k\l\g\g\s\5\o\d\i\7\l\1\n\0\c\x\2\t\v\b\x\y\h\d\3\r\g\m\3\3\i\p\w\4\z\d\u\d\r\e\e\6\3\k\p\p\r\j\2\1\m\p\q\n\m\5\6\n\o\5\4\g\r\7\a\i\7\x\s\c\x\r\5\d\j\r\k\j\e\h\d\w\g\5\2\l\v\9\3\q\b\c\q\5\i\a\o\n\i\t\e\e\1\5\7\q\y\i\0\0\5\t\x\6\9\y\9\o\t\m\u\7\v\z\t\m\g\f\l\h\5\9\z\i\r\h\g\r\e\n\u\h\b\1\t\0\m\u\z\g\6\2\a\3\w\c\k\i\4\i\u\3\e\e\x\e\l\1\9\1\o\i\3\a\k\t\d\r\t\w\2\j\4\q\m\j\w\6\j\2\s\1\2\8\5\6\t\6\9\d\e\v\n\4\j\f\s\x\a\s\9\x\8\k\g\s\c\1\h\h\r\s\e\b\d\q\r\c\d\i\s\3\0\n\9\1\l\a\z\8\0\2\5\m\4\f\d\7\5\4\c\z\y\s\k\5\i\2\u\6\n\m\c\z\d\w\i\x\6\y\t\h\w\7\d\d\l\0\4\3\m\e\3\2\k\o\n\8\u\f\k\0\c\a\x\f\c\z\h\n\f\j\9\y\7\j\v\j\p\3\m\1\j\q\r\t\l\1\9\i\m\z\d\4\1\2\o\5\s\3\e\6\3\i\u\9\o\u\q\k\7\x\8\g\5\z\a\d\w\8\2\m\k\h\t\q\z\0\b\3\u\r\b\k\n\e\5\o\4\3\s\s\z\9\f\p\2\1\q\8\t\f\c\k\u\y\2\5\h\g\z\b\1\1\d\h\9\r\5\1\r\h\0\m\u\p\p\a\f\x\g\e\7\a\w\e\s\f\2\6\m\d\l\9\5\f\9\m\l\z\y\s\o\k\d\v\0\w\g\8\k\f\f\i\o\i\h\w\2\k\f\a\l\t\f\j\p\b\t\b\x\w\c\7\0\l\t\m\a\3\z\u\u\m\j\2\n\2\8\6\s\d\c\a\u\q\b\j\t\t\4\0\s\w\p\2\h\2\z\6\d\x\i\f\r\j\c\d\5\w\s\0\b\r\c\o\g\3\9\5\i\f\i\7\b\q\z\1\0\f\8\s\9\v\l\s\d\b\k\0\b\3\5\p\5\h\3\p\a\1\x\a\g\p\i\f\3\v\k\0\a\p\9\k\v\1\r\6\o\y\x\q\l\5\c\q\p\u\7\3\f\q\q\e\h\u\o\n\n\i\7\a\6\7\1\l\r\8\9\1\8\z\m\5\d\j\q\g\f\y\2\a\9\j\y\x\v\0\w\6\7\a\c\4\5\k\2\c\1\1\o\0\8\h\h\5\p\5\f\2\m\h\x\i\j\8\p\l\1\w\v\p\s\9\q\3\3\q\b\c\w\7\9\w\s\q\e\x\4\y\0\a\o\n\z\y\3\1\r\4\7\q\0\i\x\b\2\v\8\l\7\i\k\5\6\c\8\p\d\w\k\j\c\l\t\p\f\o\l\b\m\g\j\d\r\j\o\2\s\c\f\d\c\a\m\j\c\8\7\l\3\7\7\o\1\8\z\l\m\2\z\8\b\y\2\8\l\r\j\5\d\z\x\l\y\a\u\v\u\k\3\6\i\w\1\r\s\9\m\t\b\h\u\q\q\1\k\e\c\h\a\o\j\p\y\n\h\0\2\e\h\s\s\3\0\k\9\n\c\m\j\0\j\g\b\6\q\x\3\y\a\t\z\d\0\6\s\2\p\g\6\1\o\b\9\8\0\h\n\8\d\h\j\f\d\t\w\e\4\n\r\6\p\t\k\6\4\k\6\8\a\v\n\b\d\g\k\w\m\q\b\x\o ]] 00:06:53.180 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:53.747 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:53.747 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:53.747 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.747 21:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.747 [2024-07-24 21:28:38.604363] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:53.747 [2024-07-24 21:28:38.604458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63291 ] 00:06:53.747 { 00:06:53.747 "subsystems": [ 00:06:53.747 { 00:06:53.747 "subsystem": "bdev", 00:06:53.747 "config": [ 00:06:53.747 { 00:06:53.747 "params": { 00:06:53.747 "block_size": 512, 00:06:53.747 "num_blocks": 1048576, 00:06:53.747 "name": "malloc0" 00:06:53.747 }, 00:06:53.747 "method": "bdev_malloc_create" 00:06:53.747 }, 00:06:53.747 { 00:06:53.747 "params": { 00:06:53.747 "filename": "/dev/zram1", 00:06:53.747 "name": "uring0" 00:06:53.747 }, 00:06:53.747 "method": "bdev_uring_create" 00:06:53.747 }, 00:06:53.747 { 00:06:53.747 "method": "bdev_wait_for_examine" 00:06:53.747 } 00:06:53.747 ] 00:06:53.747 } 00:06:53.747 ] 00:06:53.747 } 00:06:53.747 [2024-07-24 21:28:38.742433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.005 [2024-07-24 21:28:38.857670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.005 [2024-07-24 21:28:38.930081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.815  Copying: 168/512 [MB] (168 MBps) Copying: 335/512 [MB] (166 MBps) Copying: 501/512 [MB] (165 MBps) Copying: 512/512 [MB] (average 167 MBps) 00:06:57.815 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:58.073 21:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.073 [2024-07-24 21:28:42.882016] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:06:58.073 [2024-07-24 21:28:42.882159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63361 ] 00:06:58.073 { 00:06:58.073 "subsystems": [ 00:06:58.073 { 00:06:58.073 "subsystem": "bdev", 00:06:58.073 "config": [ 00:06:58.073 { 00:06:58.073 "params": { 00:06:58.073 "block_size": 512, 00:06:58.073 "num_blocks": 1048576, 00:06:58.073 "name": "malloc0" 00:06:58.073 }, 00:06:58.073 "method": "bdev_malloc_create" 00:06:58.073 }, 00:06:58.073 { 00:06:58.073 "params": { 00:06:58.073 "filename": "/dev/zram1", 00:06:58.073 "name": "uring0" 00:06:58.073 }, 00:06:58.073 "method": "bdev_uring_create" 00:06:58.073 }, 00:06:58.073 { 00:06:58.073 "params": { 00:06:58.073 "name": "uring0" 00:06:58.073 }, 00:06:58.073 "method": "bdev_uring_delete" 00:06:58.073 }, 00:06:58.073 { 00:06:58.073 "method": "bdev_wait_for_examine" 00:06:58.073 } 00:06:58.073 ] 00:06:58.073 } 00:06:58.073 ] 00:06:58.073 } 00:06:58.074 [2024-07-24 21:28:43.022505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.332 [2024-07-24 21:28:43.133519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.332 [2024-07-24 21:28:43.207472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.156  Copying: 0/0 [B] (average 0 Bps) 00:06:59.156 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.156 21:28:44 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:59.156 [2024-07-24 21:28:44.148261] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:06:59.156 [2024-07-24 21:28:44.148378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63391 ] 00:06:59.414 { 00:06:59.414 "subsystems": [ 00:06:59.414 { 00:06:59.414 "subsystem": "bdev", 00:06:59.414 "config": [ 00:06:59.414 { 00:06:59.414 "params": { 00:06:59.414 "block_size": 512, 00:06:59.414 "num_blocks": 1048576, 00:06:59.414 "name": "malloc0" 00:06:59.414 }, 00:06:59.414 "method": "bdev_malloc_create" 00:06:59.414 }, 00:06:59.414 { 00:06:59.414 "params": { 00:06:59.414 "filename": "/dev/zram1", 00:06:59.414 "name": "uring0" 00:06:59.414 }, 00:06:59.414 "method": "bdev_uring_create" 00:06:59.414 }, 00:06:59.414 { 00:06:59.414 "params": { 00:06:59.414 "name": "uring0" 00:06:59.414 }, 00:06:59.414 "method": "bdev_uring_delete" 00:06:59.414 }, 00:06:59.414 { 00:06:59.414 "method": "bdev_wait_for_examine" 00:06:59.414 } 00:06:59.414 ] 00:06:59.414 } 00:06:59.414 ] 00:06:59.414 } 00:06:59.414 [2024-07-24 21:28:44.286689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.414 [2024-07-24 21:28:44.395761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.671 [2024-07-24 21:28:44.470670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.929 [2024-07-24 21:28:44.735875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:59.929 [2024-07-24 21:28:44.735929] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:59.929 [2024-07-24 21:28:44.735958] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:59.929 [2024-07-24 21:28:44.735970] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.187 [2024-07-24 21:28:45.185349] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:00.445 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:00.704 00:07:00.704 ************************************ 00:07:00.704 END TEST dd_uring_copy 00:07:00.704 ************************************ 00:07:00.704 real 0m16.813s 00:07:00.704 user 0m11.275s 00:07:00.704 sys 0m13.552s 00:07:00.704 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.704 21:28:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.704 00:07:00.704 real 0m16.966s 00:07:00.704 user 0m11.329s 00:07:00.704 sys 0m13.651s 00:07:00.704 21:28:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.704 21:28:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:00.704 ************************************ 00:07:00.704 END TEST spdk_dd_uring 00:07:00.704 ************************************ 00:07:00.704 21:28:45 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:00.704 21:28:45 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.704 21:28:45 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.704 21:28:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:00.704 ************************************ 00:07:00.704 START TEST spdk_dd_sparse 00:07:00.704 ************************************ 00:07:00.704 21:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:00.963 * Looking for test storage... 00:07:00.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:00.963 1+0 records in 00:07:00.963 1+0 records out 00:07:00.963 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00746468 s, 562 MB/s 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:00.963 1+0 records in 00:07:00.963 1+0 records out 00:07:00.963 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00542811 s, 773 MB/s 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:00.963 1+0 records in 00:07:00.963 1+0 records out 00:07:00.963 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00657365 s, 638 MB/s 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:00.963 ************************************ 00:07:00.963 START TEST dd_sparse_file_to_file 00:07:00.963 ************************************ 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # 
file_to_file 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:00.963 21:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:00.963 [2024-07-24 21:28:45.894692] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:00.963 [2024-07-24 21:28:45.895335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63483 ] 00:07:00.963 { 00:07:00.963 "subsystems": [ 00:07:00.963 { 00:07:00.963 "subsystem": "bdev", 00:07:00.963 "config": [ 00:07:00.963 { 00:07:00.963 "params": { 00:07:00.963 "block_size": 4096, 00:07:00.963 "filename": "dd_sparse_aio_disk", 00:07:00.963 "name": "dd_aio" 00:07:00.963 }, 00:07:00.963 "method": "bdev_aio_create" 00:07:00.963 }, 00:07:00.963 { 00:07:00.963 "params": { 00:07:00.963 "lvs_name": "dd_lvstore", 00:07:00.963 "bdev_name": "dd_aio" 00:07:00.963 }, 00:07:00.963 "method": "bdev_lvol_create_lvstore" 00:07:00.963 }, 00:07:00.963 { 00:07:00.963 "method": "bdev_wait_for_examine" 00:07:00.963 } 00:07:00.963 ] 00:07:00.963 } 00:07:00.963 ] 00:07:00.963 } 00:07:01.222 [2024-07-24 21:28:46.029743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.222 [2024-07-24 21:28:46.161880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.480 [2024-07-24 21:28:46.236530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.739  Copying: 12/36 [MB] (average 800 MBps) 00:07:01.739 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:01.739 21:28:46 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:01.739 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:01.997 00:07:01.997 real 0m0.904s 00:07:01.997 user 0m0.601s 00:07:01.997 sys 0m0.467s 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:01.997 ************************************ 00:07:01.997 END TEST dd_sparse_file_to_file 00:07:01.997 ************************************ 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:01.997 ************************************ 00:07:01.997 START TEST dd_sparse_file_to_bdev 00:07:01.997 ************************************ 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:01.997 21:28:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:01.997 [2024-07-24 21:28:46.850478] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:07:01.997 [2024-07-24 21:28:46.850583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63531 ] 00:07:01.997 { 00:07:01.997 "subsystems": [ 00:07:01.997 { 00:07:01.997 "subsystem": "bdev", 00:07:01.997 "config": [ 00:07:01.997 { 00:07:01.997 "params": { 00:07:01.997 "block_size": 4096, 00:07:01.997 "filename": "dd_sparse_aio_disk", 00:07:01.998 "name": "dd_aio" 00:07:01.998 }, 00:07:01.998 "method": "bdev_aio_create" 00:07:01.998 }, 00:07:01.998 { 00:07:01.998 "params": { 00:07:01.998 "lvs_name": "dd_lvstore", 00:07:01.998 "lvol_name": "dd_lvol", 00:07:01.998 "size_in_mib": 36, 00:07:01.998 "thin_provision": true 00:07:01.998 }, 00:07:01.998 "method": "bdev_lvol_create" 00:07:01.998 }, 00:07:01.998 { 00:07:01.998 "method": "bdev_wait_for_examine" 00:07:01.998 } 00:07:01.998 ] 00:07:01.998 } 00:07:01.998 ] 00:07:01.998 } 00:07:01.998 [2024-07-24 21:28:46.990657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.255 [2024-07-24 21:28:47.125960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.255 [2024-07-24 21:28:47.205021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.771  Copying: 12/36 [MB] (average 413 MBps) 00:07:02.771 00:07:02.771 00:07:02.771 real 0m0.861s 00:07:02.771 user 0m0.565s 00:07:02.771 sys 0m0.463s 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:02.771 ************************************ 00:07:02.771 END TEST dd_sparse_file_to_bdev 00:07:02.771 ************************************ 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:02.771 ************************************ 00:07:02.771 START TEST dd_sparse_bdev_to_file 00:07:02.771 ************************************ 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:07:02.771 21:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:02.771 [2024-07-24 21:28:47.771243] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:03.029 [2024-07-24 21:28:47.772083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63563 ] 00:07:03.029 { 00:07:03.029 "subsystems": [ 00:07:03.029 { 00:07:03.029 "subsystem": "bdev", 00:07:03.029 "config": [ 00:07:03.029 { 00:07:03.029 "params": { 00:07:03.029 "block_size": 4096, 00:07:03.029 "filename": "dd_sparse_aio_disk", 00:07:03.029 "name": "dd_aio" 00:07:03.029 }, 00:07:03.029 "method": "bdev_aio_create" 00:07:03.029 }, 00:07:03.029 { 00:07:03.029 "method": "bdev_wait_for_examine" 00:07:03.029 } 00:07:03.029 ] 00:07:03.029 } 00:07:03.029 ] 00:07:03.029 } 00:07:03.029 [2024-07-24 21:28:47.912045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.301 [2024-07-24 21:28:48.037467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.301 [2024-07-24 21:28:48.120615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.919  Copying: 12/36 [MB] (average 857 MBps) 00:07:03.919 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:03.919 00:07:03.919 real 0m0.906s 00:07:03.919 user 0m0.569s 00:07:03.919 sys 0m0.509s 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 ************************************ 00:07:03.919 END TEST dd_sparse_bdev_to_file 00:07:03.919 ************************************ 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:03.919 00:07:03.919 real 0m2.997s 00:07:03.919 user 
0m1.826s 00:07:03.919 sys 0m1.649s 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.919 21:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 ************************************ 00:07:03.919 END TEST spdk_dd_sparse 00:07:03.919 ************************************ 00:07:03.919 21:28:48 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:03.919 21:28:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.919 21:28:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.919 21:28:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 ************************************ 00:07:03.919 START TEST spdk_dd_negative 00:07:03.919 ************************************ 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:03.919 * Looking for test storage... 00:07:03.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.919 21:28:48 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.920 ************************************ 00:07:03.920 START TEST dd_invalid_arguments 00:07:03.920 ************************************ 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.920 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:04.178 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:04.178 00:07:04.178 CPU options: 00:07:04.178 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:04.178 (like [0,1,10]) 00:07:04.178 --lcores lcore to CPU mapping list. The list is in the format: 00:07:04.178 [<,lcores[@CPUs]>...] 00:07:04.178 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:04.178 Within the group, '-' is used for range separator, 00:07:04.178 ',' is used for single number separator. 00:07:04.178 '( )' can be omitted for single element group, 00:07:04.178 '@' can be omitted if cpus and lcores have the same value 00:07:04.178 --disable-cpumask-locks Disable CPU core lock files. 00:07:04.178 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:04.178 pollers in the app support interrupt mode) 00:07:04.178 -p, --main-core main (primary) core for DPDK 00:07:04.178 00:07:04.178 Configuration options: 00:07:04.178 -c, --config, --json JSON config file 00:07:04.178 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:04.178 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:04.178 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:04.178 --rpcs-allowed comma-separated list of permitted RPCS 00:07:04.178 --json-ignore-init-errors don't exit on invalid config entry 00:07:04.178 00:07:04.178 Memory options: 00:07:04.178 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:04.178 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:04.178 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:04.178 -R, --huge-unlink unlink huge files after initialization 00:07:04.178 -n, --mem-channels number of memory channels used for DPDK 00:07:04.178 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:04.179 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:04.179 --no-huge run without using hugepages 00:07:04.179 -i, --shm-id shared memory ID (optional) 00:07:04.179 -g, --single-file-segments force creating just one hugetlbfs file 00:07:04.179 00:07:04.179 PCI options: 00:07:04.179 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:04.179 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:04.179 -u, --no-pci disable PCI access 00:07:04.179 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:04.179 00:07:04.179 Log options: 00:07:04.179 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:04.179 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:04.179 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:04.179 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:04.179 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:04.179 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:04.179 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:04.179 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:04.179 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:04.179 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:04.179 virtio_vfio_user, vmd) 00:07:04.179 --silence-noticelog 
disable notice level logging to stderr 00:07:04.179 00:07:04.179 Trace options: 00:07:04.179 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:04.179 setting 0 to disable trace (default 32768) 00:07:04.179 Tracepoints vary in size and can use more than one trace entry. 00:07:04.179 -e, --tpoint-group [:] 00:07:04.179 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:04.179 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:04.179 [2024-07-24 21:28:48.908225] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:04.179 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:04.179 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:04.179 a tracepoint group. First tpoint inside a group can be enabled by 00:07:04.179 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:04.179 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:04.179 in /include/spdk_internal/trace_defs.h 00:07:04.179 00:07:04.179 Other options: 00:07:04.179 -h, --help show this usage 00:07:04.179 -v, --version print SPDK version 00:07:04.179 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:04.179 --env-context Opaque context for use of the env implementation 00:07:04.179 00:07:04.179 Application specific: 00:07:04.179 [--------- DD Options ---------] 00:07:04.179 --if Input file. Must specify either --if or --ib. 00:07:04.179 --ib Input bdev. Must specifier either --if or --ib 00:07:04.179 --of Output file. Must specify either --of or --ob. 00:07:04.179 --ob Output bdev. Must specify either --of or --ob. 00:07:04.179 --iflag Input file flags. 00:07:04.179 --oflag Output file flags. 00:07:04.179 --bs I/O unit size (default: 4096) 00:07:04.179 --qd Queue depth (default: 2) 00:07:04.179 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:04.179 --skip Skip this many I/O units at start of input. (default: 0) 00:07:04.179 --seek Skip this many I/O units at start of output. (default: 0) 00:07:04.179 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:04.179 --sparse Enable hole skipping in input target 00:07:04.179 Available iflag and oflag values: 00:07:04.179 append - append mode 00:07:04.179 direct - use direct I/O for data 00:07:04.179 directory - fail unless a directory 00:07:04.179 dsync - use synchronized I/O for data 00:07:04.179 noatime - do not update access time 00:07:04.179 noctty - do not assign controlling terminal from file 00:07:04.179 nofollow - do not follow symlinks 00:07:04.179 nonblock - use non-blocking I/O 00:07:04.179 sync - use synchronized I/O for data and metadata 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.179 00:07:04.179 real 0m0.080s 00:07:04.179 user 0m0.043s 00:07:04.179 sys 0m0.034s 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 ************************************ 00:07:04.179 END TEST dd_invalid_arguments 00:07:04.179 ************************************ 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 ************************************ 00:07:04.179 START TEST dd_double_input 00:07:04.179 ************************************ 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.179 21:28:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:04.179 [2024-07-24 21:28:49.043713] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.179 00:07:04.179 real 0m0.083s 00:07:04.179 user 0m0.043s 00:07:04.179 sys 0m0.036s 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 ************************************ 00:07:04.179 END TEST dd_double_input 00:07:04.179 ************************************ 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 ************************************ 00:07:04.179 START TEST dd_double_output 00:07:04.179 ************************************ 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.179 21:28:49 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.179 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.180 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.180 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.180 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:04.438 [2024-07-24 21:28:49.178833] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.438 00:07:04.438 real 0m0.080s 00:07:04.438 user 0m0.051s 00:07:04.438 sys 0m0.025s 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:04.438 ************************************ 00:07:04.438 END TEST dd_double_output 00:07:04.438 ************************************ 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.438 ************************************ 00:07:04.438 START TEST dd_no_input 00:07:04.438 ************************************ 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:04.438 [2024-07-24 21:28:49.305768] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.438 ************************************ 00:07:04.438 END TEST dd_no_input 00:07:04.438 ************************************ 00:07:04.438 00:07:04.438 real 0m0.075s 00:07:04.438 user 0m0.049s 00:07:04.438 sys 0m0.024s 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.438 21:28:49 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.439 ************************************ 00:07:04.439 START TEST dd_no_output 00:07:04.439 ************************************ 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.439 21:28:49 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.439 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.439 [2024-07-24 21:28:49.431617] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:04.697 ************************************ 00:07:04.697 END TEST dd_no_output 00:07:04.697 ************************************ 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.697 00:07:04.697 real 0m0.071s 00:07:04.697 user 0m0.040s 00:07:04.697 sys 0m0.030s 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.697 ************************************ 00:07:04.697 START TEST dd_wrong_blocksize 00:07:04.697 ************************************ 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.697 21:28:49 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:04.697 [2024-07-24 21:28:49.559513] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:04.697 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.698 ************************************ 00:07:04.698 END TEST dd_wrong_blocksize 00:07:04.698 ************************************ 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.698 00:07:04.698 real 0m0.078s 00:07:04.698 user 0m0.051s 00:07:04.698 sys 0m0.024s 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 ************************************ 00:07:04.698 START TEST dd_smaller_blocksize 00:07:04.698 ************************************ 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.698 21:28:49 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.698 21:28:49 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:04.698 [2024-07-24 21:28:49.690093] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:04.698 [2024-07-24 21:28:49.690158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63787 ] 00:07:04.956 [2024-07-24 21:28:49.824162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.956 [2024-07-24 21:28:49.951006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.214 [2024-07-24 21:28:50.027007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.473 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:05.473 [2024-07-24 21:28:50.383366] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:05.473 [2024-07-24 21:28:50.383427] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.731 [2024-07-24 21:28:50.560001] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.731 00:07:05.731 real 0m1.055s 00:07:05.731 user 0m0.489s 00:07:05.731 sys 0m0.458s 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.731 21:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:05.731 ************************************ 00:07:05.731 END TEST dd_smaller_blocksize 00:07:05.731 ************************************ 00:07:05.990 21:28:50 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.990 ************************************ 00:07:05.990 START TEST dd_invalid_count 00:07:05.990 ************************************ 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:05.990 [2024-07-24 21:28:50.805271] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.990 00:07:05.990 real 0m0.071s 00:07:05.990 user 0m0.046s 00:07:05.990 sys 0m0.024s 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.990 ************************************ 00:07:05.990 
END TEST dd_invalid_count 00:07:05.990 ************************************ 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.990 ************************************ 00:07:05.990 START TEST dd_invalid_oflag 00:07:05.990 ************************************ 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:05.990 [2024-07-24 21:28:50.930820] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.990 ************************************ 00:07:05.990 END TEST dd_invalid_oflag 00:07:05.990 ************************************ 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.990 00:07:05.990 real 0m0.077s 00:07:05.990 user 0m0.043s 00:07:05.990 sys 0m0.032s 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.990 21:28:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:06.249 21:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:06.249 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.249 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.249 21:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.249 ************************************ 00:07:06.249 START TEST dd_invalid_iflag 00:07:06.249 ************************************ 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:06.249 [2024-07-24 21:28:51.062814] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.249 00:07:06.249 real 0m0.076s 00:07:06.249 user 0m0.046s 00:07:06.249 sys 0m0.029s 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:06.249 
************************************ 00:07:06.249 END TEST dd_invalid_iflag 00:07:06.249 ************************************ 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.249 ************************************ 00:07:06.249 START TEST dd_unknown_flag 00:07:06.249 ************************************ 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.249 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:06.249 [2024-07-24 21:28:51.191937] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
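The two negative tests just completed (dd_invalid_oflag and dd_invalid_iflag) pin down spdk_dd's flag-pairing rule: --oflag is only accepted together with --of, and --iflag only together with --if, exactly as the errors above report. A minimal sketch of the rejected and accepted forms follows; the rejected invocations are the ones the tests run, while the accepted line uses illustrative paths and test data, with flag values taken from the "Available iflag and oflag values" help text near the start of this section.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path used throughout this run

# Rejected: a file flag without its matching file option (these are the test invocations above)
"$SPDK_DD" --ib= --ob= --oflag=0      # "--oflags may be used only with --of"
"$SPDK_DD" --ib= --ob= --iflag=0      # "--iflags may be used only with --if"

# Accepted pairing (illustrative): the flags ride along with the --if/--of they modify,
# and this line performs a real copy of the hypothetical input file created here.
head -c 65536 /dev/zero > /tmp/dd.in
"$SPDK_DD" --if=/tmp/dd.in --iflag=noatime --of=/tmp/dd.out --oflag=dsync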
00:07:06.249 [2024-07-24 21:28:51.192021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63885 ] 00:07:06.507 [2024-07-24 21:28:51.332002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.507 [2024-07-24 21:28:51.454159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.765 [2024-07-24 21:28:51.535633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.765 [2024-07-24 21:28:51.582240] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:06.765 [2024-07-24 21:28:51.582543] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.765 [2024-07-24 21:28:51.582648] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:06.765 [2024-07-24 21:28:51.582666] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.765 [2024-07-24 21:28:51.582970] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:06.765 [2024-07-24 21:28:51.582987] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.766 [2024-07-24 21:28:51.583046] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:06.766 [2024-07-24 21:28:51.583057] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:06.766 [2024-07-24 21:28:51.750560] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:07.024 ************************************ 00:07:07.024 END TEST dd_unknown_flag 00:07:07.024 ************************************ 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.024 00:07:07.024 real 0m0.784s 00:07:07.024 user 0m0.472s 00:07:07.024 sys 0m0.212s 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.024 ************************************ 00:07:07.024 START TEST dd_invalid_json 00:07:07.024 ************************************ 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.024 21:28:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:07.283 [2024-07-24 21:28:52.042504] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
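The long valid_exec_arg / type -t / type -P runs in every one of these traces come from the NOT wrapper in common/autotest_common.sh, which executes the command and treats a non-zero exit as the expected outcome. A much simplified stand-in for that idea is sketched below; the real helper also remaps and classifies exit codes above 128, which is what the es=234, es=106, es=1 lines in the traces record.

# Simplified stand-in for the NOT pattern visible in the traces (not the real helper).
not_expected_to_succeed() {
    local es=0
    "$@" || es=$?        # run the wrapped command, remember how it exited
    (( es != 0 ))        # succeed only if the wrapped command failed
}

# Mirrors the dd_no_input test above: spdk_dd must reject a call with no input specified.
not_expected_to_succeed /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
# (fails with "You must specify either --if or --ib", as logged earlier in this section)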
00:07:07.283 [2024-07-24 21:28:52.042610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63919 ] 00:07:07.283 [2024-07-24 21:28:52.183913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.541 [2024-07-24 21:28:52.309768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.541 [2024-07-24 21:28:52.309906] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:07.541 [2024-07-24 21:28:52.309923] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:07.541 [2024-07-24 21:28:52.309933] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.541 [2024-07-24 21:28:52.309974] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:07.541 ************************************ 00:07:07.541 END TEST dd_invalid_json 00:07:07.541 ************************************ 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.541 00:07:07.541 real 0m0.482s 00:07:07.541 user 0m0.279s 00:07:07.541 sys 0m0.101s 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:07.541 ************************************ 00:07:07.541 END TEST spdk_dd_negative 00:07:07.541 ************************************ 00:07:07.541 00:07:07.541 real 0m3.763s 00:07:07.541 user 0m1.882s 00:07:07.541 sys 0m1.488s 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.541 21:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.801 00:07:07.801 real 1m29.765s 00:07:07.801 user 0m58.273s 00:07:07.801 sys 0m40.176s 00:07:07.801 21:28:52 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.801 ************************************ 00:07:07.801 END TEST spdk_dd 00:07:07.801 ************************************ 00:07:07.801 21:28:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:07.801 21:28:52 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:07.801 21:28:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.801 21:28:52 -- common/autotest_common.sh@10 -- # set +x 00:07:07.801 21:28:52 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:07.801 21:28:52 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:07.801 21:28:52 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.801 21:28:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.801 21:28:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.801 21:28:52 -- common/autotest_common.sh@10 -- # set +x 00:07:07.801 ************************************ 00:07:07.801 START TEST nvmf_tcp 00:07:07.801 ************************************ 00:07:07.801 21:28:52 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.801 * Looking for test storage... 00:07:07.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:07.801 21:28:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.801 21:28:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:07.801 21:28:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.801 21:28:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.801 21:28:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.801 21:28:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.801 ************************************ 00:07:07.801 START TEST nvmf_target_core 00:07:07.801 ************************************ 00:07:07.801 21:28:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.061 * Looking for test storage... 00:07:08.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.061 ************************************ 00:07:08.061 START TEST nvmf_host_management 00:07:08.061 ************************************ 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:08.061 * Looking for test storage... 
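From here the log has switched from spdk_dd to the NVMe-oF target tests, and nvmf/common.sh has been sourced, which is where the defaults shown above come from (TCP port 4420, subsystem NQN nqn.2016-06.io.spdk:testnqn, a host NQN generated on the fly with nvme gen-hostnqn). Purely for orientation, this is roughly how those variables would combine into a kernel-initiator connect call; the 10.0.0.2 target address is the NVMF_FIRST_TARGET_IP value assigned a little further down, and whether a given test uses the kernel initiator or SPDK's own is decided per test, so treat this as an illustration of the variables rather than a line from the scripts.

# Illustration only: the sourced defaults expressed as an nvme-cli connect command.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # generated exactly as nvmf/common.sh does above
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn --hostnqn="$NVME_HOSTNQN"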
00:07:08.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.061 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.062 21:28:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:08.062 Cannot find device "nvmf_init_br" 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:08.062 Cannot find device "nvmf_tgt_br" 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.062 Cannot find device "nvmf_tgt_br2" 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:08.062 Cannot find device "nvmf_init_br" 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:08.062 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:08.321 Cannot find device "nvmf_tgt_br" 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:08.321 Cannot find device "nvmf_tgt_br2" 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:08.321 Cannot find device "nvmf_br" 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:08.321 Cannot find device "nvmf_init_if" 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.321 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:08.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:08.580 00:07:08.580 --- 10.0.0.2 ping statistics --- 00:07:08.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.580 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:08.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:08.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:07:08.580 00:07:08.580 --- 10.0.0.3 ping statistics --- 00:07:08.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.580 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:07:08.580 00:07:08.580 --- 10.0.0.1 ping statistics --- 00:07:08.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.580 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
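Editor's sketch: taken together, the nvmf_veth_init steps traced above reduce to the following sequence (commands, interface names and addresses copied from the log; the second target interface nvmf_tgt_if2 / nvmf_tgt_br2 / 10.0.0.3 follows the same pattern and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # host side -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host side

The bridge puts the initiator-side veth end and the target-side veth ends on one L2 segment, so TCP port 4420 inside the namespace is reachable from the host, which the three pings above confirm.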
00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64195 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64195 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64195 ']' 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.580 21:28:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.580 [2024-07-24 21:28:53.470754] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:08.580 [2024-07-24 21:28:53.470925] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.838 [2024-07-24 21:28:53.615260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.838 [2024-07-24 21:28:53.753065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.838 [2024-07-24 21:28:53.753141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.838 [2024-07-24 21:28:53.753156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.838 [2024-07-24 21:28:53.753167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.838 [2024-07-24 21:28:53.753177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
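Editor's sketch: the target itself is launched inside that namespace, as a rough equivalent of what common.sh does in the trace above (binary path, flags and socket path copied from the log; the background/PID-capture shape is assumed):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                 # 64195 in this run (assumed capture via $!)
  waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs

The -m 0x1E core mask selects cores 1-4, which matches the "Reactor started on core" notices that follow.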
00:07:08.838 [2024-07-24 21:28:53.753358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.838 [2024-07-24 21:28:53.753513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.838 [2024-07-24 21:28:53.753745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.838 [2024-07-24 21:28:53.753747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.838 [2024-07-24 21:28:53.829478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.773 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.773 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:09.773 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 [2024-07-24 21:28:54.532483] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 Malloc0 00:07:09.774 [2024-07-24 21:28:54.621785] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64262 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64262 /var/tmp/bdevperf.sock 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64262 ']' 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:09.774 { 00:07:09.774 "params": { 00:07:09.774 "name": "Nvme$subsystem", 00:07:09.774 "trtype": "$TEST_TRANSPORT", 00:07:09.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.774 "adrfam": "ipv4", 00:07:09.774 "trsvcid": "$NVMF_PORT", 00:07:09.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.774 "hdgst": ${hdgst:-false}, 00:07:09.774 "ddgst": ${ddgst:-false} 00:07:09.774 }, 00:07:09.774 "method": "bdev_nvme_attach_controller" 00:07:09.774 } 00:07:09.774 EOF 00:07:09.774 )") 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
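Editor's sketch: the /dev/fd/63 seen in the bdevperf command line above is bash process substitution carrying the JSON that gen_nvmf_target_json renders from the heredoc (and that the printf on the next trace lines echoes back). A sketch of the equivalent invocation, with paths and flags copied from the log:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10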
00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:09.774 21:28:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:09.774 "params": { 00:07:09.774 "name": "Nvme0", 00:07:09.774 "trtype": "tcp", 00:07:09.774 "traddr": "10.0.0.2", 00:07:09.774 "adrfam": "ipv4", 00:07:09.774 "trsvcid": "4420", 00:07:09.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.774 "hdgst": false, 00:07:09.774 "ddgst": false 00:07:09.774 }, 00:07:09.774 "method": "bdev_nvme_attach_controller" 00:07:09.774 }' 00:07:09.774 [2024-07-24 21:28:54.730161] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:09.774 [2024-07-24 21:28:54.730244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64262 ] 00:07:10.032 [2024-07-24 21:28:54.871849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.032 [2024-07-24 21:28:55.012823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.290 [2024-07-24 21:28:55.098538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.290 Running I/O for 10 seconds... 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.857 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.857 [2024-07-24 21:28:55.815824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.857 [2024-07-24 21:28:55.815890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.857 [2024-07-24 21:28:55.815914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.857 [2024-07-24 21:28:55.815924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.857 [2024-07-24 21:28:55.815937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.857 [2024-07-24 21:28:55.815946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.857 [2024-07-24 21:28:55.815957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.815967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.815978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.815987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.815997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:10.858 [2024-07-24 21:28:55.816246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:10.858 [2024-07-24 21:28:55.816458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 
[2024-07-24 21:28:55.816693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.858 [2024-07-24 21:28:55.816820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.858 [2024-07-24 21:28:55.816831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816911] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.816981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.816992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.859 [2024-07-24 21:28:55.817268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.817411] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1619ec0 was disconnected and freed. reset controller. 
00:07:10.859 task offset: 102528 on job bdev=Nvme0n1 fails 00:07:10.859 00:07:10.859 Latency(us) 00:07:10.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.859 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:10.859 Job: Nvme0n1 ended in about 0.59 seconds with error 00:07:10.859 Verification LBA range: start 0x0 length 0x400 00:07:10.859 Nvme0n1 : 0.59 1297.80 81.11 108.15 0.00 44285.03 2159.71 43134.60 00:07:10.859 =================================================================================================================== 00:07:10.859 Total : 1297.80 81.11 108.15 0.00 44285.03 2159.71 43134.60 00:07:10.859 [2024-07-24 21:28:55.818582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:10.859 [2024-07-24 21:28:55.820635] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.859 [2024-07-24 21:28:55.820661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1611d50 (9): Bad file descriptor 00:07:10.859 [2024-07-24 21:28:55.822569] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:10.859 [2024-07-24 21:28:55.822726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:10.859 [2024-07-24 21:28:55.822763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.859 [2024-07-24 21:28:55.822784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:10.859 [2024-07-24 21:28:55.822795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:10.859 [2024-07-24 21:28:55.822805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:10.859 [2024-07-24 21:28:55.822814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1611d50 00:07:10.859 [2024-07-24 21:28:55.822850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1611d50 (9): Bad file descriptor 00:07:10.859 [2024-07-24 21:28:55.822869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:10.859 [2024-07-24 21:28:55.822879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:10.859 [2024-07-24 21:28:55.822891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:10.859 [2024-07-24 21:28:55.822915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
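Editor's note: this failure is what the host-management test exercises. While bdevperf has I/O in flight, the host is dropped from the subsystem's allowed list, the queue pair is torn down (the ABORTED - SQ DELETION completions above), and the reconnect is rejected with "does not allow host" (sct 1, sc 132), so the controller reset fails. The access toggle is just a pair of RPCs, with NQNs copied from the log (access is restored in the very next step):

  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # revoke access mid-I/O
  rpc_cmd nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # restore access for the second bdevperf run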
00:07:10.859 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.859 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.859 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.859 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.859 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.859 21:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64262 00:07:12.234 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64262) - No such process 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:12.234 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:12.234 { 00:07:12.234 "params": { 00:07:12.234 "name": "Nvme$subsystem", 00:07:12.234 "trtype": "$TEST_TRANSPORT", 00:07:12.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:12.234 "adrfam": "ipv4", 00:07:12.234 "trsvcid": "$NVMF_PORT", 00:07:12.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:12.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:12.234 "hdgst": ${hdgst:-false}, 00:07:12.234 "ddgst": ${ddgst:-false} 00:07:12.234 }, 00:07:12.234 "method": "bdev_nvme_attach_controller" 00:07:12.234 } 00:07:12.235 EOF 00:07:12.235 )") 00:07:12.235 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:12.235 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:07:12.235 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:12.235 21:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:12.235 "params": { 00:07:12.235 "name": "Nvme0", 00:07:12.235 "trtype": "tcp", 00:07:12.235 "traddr": "10.0.0.2", 00:07:12.235 "adrfam": "ipv4", 00:07:12.235 "trsvcid": "4420", 00:07:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:12.235 "hdgst": false, 00:07:12.235 "ddgst": false 00:07:12.235 }, 00:07:12.235 "method": "bdev_nvme_attach_controller" 00:07:12.235 }' 00:07:12.235 [2024-07-24 21:28:56.898104] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:12.235 [2024-07-24 21:28:56.898185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64300 ] 00:07:12.235 [2024-07-24 21:28:57.033209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.235 [2024-07-24 21:28:57.167756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.492 [2024-07-24 21:28:57.250712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.492 Running I/O for 1 seconds... 00:07:13.426 00:07:13.426 Latency(us) 00:07:13.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.426 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:13.426 Verification LBA range: start 0x0 length 0x400 00:07:13.426 Nvme0n1 : 1.04 1414.49 88.41 0.00 0.00 44391.32 5183.30 41704.73 00:07:13.426 =================================================================================================================== 00:07:13.426 Total : 1414.49 88.41 0.00 0.00 44391.32 5183.30 41704.73 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.992 rmmod nvme_tcp 00:07:13.992 rmmod nvme_fabrics 00:07:13.992 rmmod nvme_keyring 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64195 ']' 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64195 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64195 ']' 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64195 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64195 00:07:13.992 killing process with pid 64195 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64195' 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64195 00:07:13.992 21:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64195 00:07:14.250 [2024-07-24 21:28:59.233328] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:14.509 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:14.510 00:07:14.510 real 0m6.425s 00:07:14.510 user 0m24.738s 00:07:14.510 sys 0m1.779s 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.510 
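Editor's sketch: the nvmftestfini teardown traced above reverses the setup, roughly (names and commands copied from the log; the retry count comes from the {1..20} loop in common.sh):

  modprobe -v -r nvme-tcp        # retried up to 20 times if busy
  modprobe -v -r nvme-fabrics
  killprocess 64195              # the nvmf_tgt started earlier
  _remove_spdk_ns                # deletes nvmf_tgt_ns_spdk and the veth ends inside it
  ip -4 addr flush nvmf_init_if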
************************************ 00:07:14.510 END TEST nvmf_host_management 00:07:14.510 ************************************ 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.510 ************************************ 00:07:14.510 START TEST nvmf_lvol 00:07:14.510 ************************************ 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.510 * Looking for test storage... 00:07:14.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.510 21:28:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:14.510 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:14.511 Cannot find device "nvmf_tgt_br" 00:07:14.511 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:14.511 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:14.768 Cannot find device "nvmf_tgt_br2" 00:07:14.768 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:14.768 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:14.768 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:14.768 Cannot find device "nvmf_tgt_br" 00:07:14.768 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:14.769 Cannot find device "nvmf_tgt_br2" 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:14.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:14.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:14.769 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:15.027 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:15.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:15.028 00:07:15.028 --- 10.0.0.2 ping statistics --- 00:07:15.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.028 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:15.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:15.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:15.028 00:07:15.028 --- 10.0.0.3 ping statistics --- 00:07:15.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.028 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:15.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:07:15.028 00:07:15.028 --- 10.0.0.1 ping statistics --- 00:07:15.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.028 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64519 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64519 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64519 ']' 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.028 21:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.028 [2024-07-24 21:28:59.960020] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
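The three ping checks above confirm the virtual topology that nvmftestinit builds for every TCP run in this log before the target application is started inside the namespace: a veth pair for the initiator in the root namespace, veth pairs for the target inside nvmf_tgt_ns_spdk, and a bridge tying them together. A condensed sketch of that shape, using the same commands traced above (the real common.sh also tears down any previous topology and adds a second target interface at 10.0.0.3):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge && ip link set nvmf_br up   # join both halves
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT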
00:07:15.028 [2024-07-24 21:28:59.960124] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.286 [2024-07-24 21:29:00.099835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.286 [2024-07-24 21:29:00.232879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.286 [2024-07-24 21:29:00.232957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.286 [2024-07-24 21:29:00.232985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.286 [2024-07-24 21:29:00.232993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.286 [2024-07-24 21:29:00.233000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.286 [2024-07-24 21:29:00.233875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.286 [2024-07-24 21:29:00.233964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.286 [2024-07-24 21:29:00.233969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.543 [2024-07-24 21:29:00.307515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.109 21:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.367 [2024-07-24 21:29:01.224729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.367 21:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.625 21:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:16.625 21:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.883 21:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:16.883 21:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:17.141 21:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:17.399 21:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=096c56a6-e711-4a93-8608-0e9ae23bfef9 00:07:17.399 21:29:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 096c56a6-e711-4a93-8608-0e9ae23bfef9 lvol 20 00:07:17.657 21:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7f74f9fe-c374-4ff9-8572-57f7cbeb8b12 00:07:17.657 21:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.915 21:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f74f9fe-c374-4ff9-8572-57f7cbeb8b12 00:07:18.173 21:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.431 [2024-07-24 21:29:03.285024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.431 21:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.689 21:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64596 00:07:18.690 21:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:18.690 21:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:19.629 21:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7f74f9fe-c374-4ff9-8572-57f7cbeb8b12 MY_SNAPSHOT 00:07:19.898 21:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f187bca6-ea6c-44b9-a49b-5df9b39285e6 00:07:19.898 21:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7f74f9fe-c374-4ff9-8572-57f7cbeb8b12 30 00:07:20.156 21:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f187bca6-ea6c-44b9-a49b-5df9b39285e6 MY_CLONE 00:07:20.415 21:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b832bdf4-146c-4aa0-a3a4-ee7dee0e2ee8 00:07:20.415 21:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b832bdf4-146c-4aa0-a3a4-ee7dee0e2ee8 00:07:20.983 21:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64596 00:07:29.093 Initializing NVMe Controllers 00:07:29.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:29.093 Controller IO queue size 128, less than required. 00:07:29.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:29.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:29.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:29.093 Initialization complete. Launching workers. 
00:07:29.093 ======================================================== 00:07:29.093 Latency(us) 00:07:29.093 Device Information : IOPS MiB/s Average min max 00:07:29.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11333.40 44.27 11295.44 3298.03 53534.84 00:07:29.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11367.40 44.40 11259.10 3468.06 60016.81 00:07:29.093 ======================================================== 00:07:29.093 Total : 22700.80 88.67 11277.24 3298.03 60016.81 00:07:29.093 00:07:29.093 21:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:29.093 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7f74f9fe-c374-4ff9-8572-57f7cbeb8b12 00:07:29.352 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 096c56a6-e711-4a93-8608-0e9ae23bfef9 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.610 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.869 rmmod nvme_tcp 00:07:29.869 rmmod nvme_fabrics 00:07:29.869 rmmod nvme_keyring 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64519 ']' 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64519 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64519 ']' 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64519 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64519 00:07:29.869 killing process with pid 64519 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 64519' 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64519 00:07:29.869 21:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64519 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:30.128 ************************************ 00:07:30.128 END TEST nvmf_lvol 00:07:30.128 ************************************ 00:07:30.128 00:07:30.128 real 0m15.724s 00:07:30.128 user 1m4.915s 00:07:30.128 sys 0m4.078s 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.128 21:29:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.388 ************************************ 00:07:30.388 START TEST nvmf_lvs_grow 00:07:30.388 ************************************ 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.388 * Looking for test storage... 
00:07:30.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.388 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
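From this point nvmf_lvs_grow.sh repeats the same nvmftestinit bring-up seen above, then moves to its actual subject: a logical volume store created on an AIO bdev whose backing file is enlarged underneath it. The core of that flow, condensed from the RPCs traced later in this test (a sketch assuming the same repo layout; the $rpc, $aio_file and $lvs shorthands are illustrative, not names used by the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"                      # 200 MiB backing file
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096    # expose it as bdev "aio_bdev"

  # Create the lvstore; with 4 MiB clusters the 200 MiB file yields 49 data
  # clusters, which the test checks via jq. The RPC prints the lvstore UUID.
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150          # 150 MiB lvol inside it

  truncate -s 400M "$aio_file"                      # grow the file on disk
  $rpc bdev_aio_rescan aio_bdev                     # AIO bdev picks up the new size
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'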
00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:30.389 Cannot find device "nvmf_tgt_br" 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.389 Cannot find device "nvmf_tgt_br2" 00:07:30.389 21:29:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:30.389 Cannot find device "nvmf_tgt_br" 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:30.389 Cannot find device "nvmf_tgt_br2" 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:30.389 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.648 21:29:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:30.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:07:30.648 00:07:30.648 --- 10.0.0.2 ping statistics --- 00:07:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.648 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:30.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:07:30.648 00:07:30.648 --- 10.0.0.3 ping statistics --- 00:07:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.648 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:30.648 00:07:30.648 --- 10.0.0.1 ping statistics --- 00:07:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.648 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:30.648 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.649 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.649 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.649 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.649 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.649 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.649 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64918 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64918 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 64918 ']' 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.908 21:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.908 [2024-07-24 21:29:15.718980] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:07:30.908 [2024-07-24 21:29:15.719072] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.908 [2024-07-24 21:29:15.860522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.192 [2024-07-24 21:29:15.985649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.192 [2024-07-24 21:29:15.985755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.192 [2024-07-24 21:29:15.985765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.192 [2024-07-24 21:29:15.985773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.192 [2024-07-24 21:29:15.985779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.192 [2024-07-24 21:29:15.985807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.192 [2024-07-24 21:29:16.057107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.758 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:32.018 [2024-07-24 21:29:16.967437] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.018 ************************************ 00:07:32.018 START TEST lvs_grow_clean 00:07:32.018 ************************************ 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:32.018 21:29:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:32.018 21:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.018 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.018 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.277 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:32.277 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:32.844 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:32.844 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:32.844 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:32.844 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:32.844 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:32.844 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 lvol 150 00:07:33.103 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=677f0c4a-16f2-4469-a96c-495ee679da70 00:07:33.103 21:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:33.103 21:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:33.361 [2024-07-24 21:29:18.244555] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:33.361 [2024-07-24 21:29:18.244686] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:33.361 true 00:07:33.361 21:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:33.361 21:29:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:33.619 21:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:33.619 21:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.877 21:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 677f0c4a-16f2-4469-a96c-495ee679da70 00:07:34.135 21:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.393 [2024-07-24 21:29:19.217147] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.393 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65002 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65002 /var/tmp/bdevperf.sock 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 65002 ']' 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.651 21:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.651 [2024-07-24 21:29:19.506304] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
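
Up to this point the trace has built the device stack for the clean-grow case: a 200M file exposed as an AIO bdev with a 4 KiB block size, an lvstore on top of it with 4 MiB clusters (49 data clusters), a 150M lvol, and a backing file that has already been truncated to 400M and rescanned. A condensed sketch of that sequence, using the same rpc.py calls that appear in the trace (paths, names and sizes are copied from the log; $lvs and $lvol are simply whatever UUIDs the RPCs print):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"                          # 200 MiB backing file
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096        # AIO bdev, 4 KiB blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 4 MiB clusters -> 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB logical volume

  truncate -s 400M "$aio_file"                          # grow the backing file ...
  $rpc bdev_aio_rescan aio_bdev                         # ... and let the AIO bdev pick up the new size
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

The lvstore itself is deliberately not grown yet; bdev_lvol_grow_lvstore is only issued later, while bdevperf is writing to the exported namespace, which is the situation the test is actually exercising.
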
00:07:34.651 [2024-07-24 21:29:19.506389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65002 ] 00:07:34.651 [2024-07-24 21:29:19.642818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.909 [2024-07-24 21:29:19.792604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.909 [2024-07-24 21:29:19.862868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.845 21:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.845 21:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:35.845 21:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:35.845 Nvme0n1 00:07:35.845 21:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:36.104 [ 00:07:36.104 { 00:07:36.104 "name": "Nvme0n1", 00:07:36.104 "aliases": [ 00:07:36.104 "677f0c4a-16f2-4469-a96c-495ee679da70" 00:07:36.104 ], 00:07:36.104 "product_name": "NVMe disk", 00:07:36.104 "block_size": 4096, 00:07:36.104 "num_blocks": 38912, 00:07:36.104 "uuid": "677f0c4a-16f2-4469-a96c-495ee679da70", 00:07:36.104 "assigned_rate_limits": { 00:07:36.104 "rw_ios_per_sec": 0, 00:07:36.104 "rw_mbytes_per_sec": 0, 00:07:36.104 "r_mbytes_per_sec": 0, 00:07:36.104 "w_mbytes_per_sec": 0 00:07:36.104 }, 00:07:36.104 "claimed": false, 00:07:36.104 "zoned": false, 00:07:36.104 "supported_io_types": { 00:07:36.104 "read": true, 00:07:36.104 "write": true, 00:07:36.104 "unmap": true, 00:07:36.104 "flush": true, 00:07:36.104 "reset": true, 00:07:36.104 "nvme_admin": true, 00:07:36.104 "nvme_io": true, 00:07:36.104 "nvme_io_md": false, 00:07:36.104 "write_zeroes": true, 00:07:36.104 "zcopy": false, 00:07:36.104 "get_zone_info": false, 00:07:36.104 "zone_management": false, 00:07:36.104 "zone_append": false, 00:07:36.104 "compare": true, 00:07:36.104 "compare_and_write": true, 00:07:36.104 "abort": true, 00:07:36.104 "seek_hole": false, 00:07:36.104 "seek_data": false, 00:07:36.104 "copy": true, 00:07:36.104 "nvme_iov_md": false 00:07:36.104 }, 00:07:36.104 "memory_domains": [ 00:07:36.104 { 00:07:36.104 "dma_device_id": "system", 00:07:36.104 "dma_device_type": 1 00:07:36.104 } 00:07:36.104 ], 00:07:36.104 "driver_specific": { 00:07:36.104 "nvme": [ 00:07:36.104 { 00:07:36.104 "trid": { 00:07:36.104 "trtype": "TCP", 00:07:36.104 "adrfam": "IPv4", 00:07:36.104 "traddr": "10.0.0.2", 00:07:36.104 "trsvcid": "4420", 00:07:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:36.104 }, 00:07:36.104 "ctrlr_data": { 00:07:36.104 "cntlid": 1, 00:07:36.104 "vendor_id": "0x8086", 00:07:36.104 "model_number": "SPDK bdev Controller", 00:07:36.104 "serial_number": "SPDK0", 00:07:36.104 "firmware_revision": "24.09", 00:07:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.104 "oacs": { 00:07:36.104 "security": 0, 00:07:36.104 "format": 0, 00:07:36.104 "firmware": 0, 00:07:36.104 "ns_manage": 0 
00:07:36.104 }, 00:07:36.104 "multi_ctrlr": true, 00:07:36.104 "ana_reporting": false 00:07:36.104 }, 00:07:36.104 "vs": { 00:07:36.104 "nvme_version": "1.3" 00:07:36.104 }, 00:07:36.104 "ns_data": { 00:07:36.104 "id": 1, 00:07:36.104 "can_share": true 00:07:36.104 } 00:07:36.104 } 00:07:36.104 ], 00:07:36.104 "mp_policy": "active_passive" 00:07:36.104 } 00:07:36.104 } 00:07:36.104 ] 00:07:36.104 21:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65024 00:07:36.104 21:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:36.104 21:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.363 Running I/O for 10 seconds... 00:07:37.321 Latency(us) 00:07:37.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.321 Nvme0n1 : 1.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:07:37.321 =================================================================================================================== 00:07:37.321 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:07:37.321 00:07:38.257 21:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:38.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.257 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:38.257 =================================================================================================================== 00:07:38.257 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:38.257 00:07:38.523 true 00:07:38.523 21:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:38.523 21:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:38.787 21:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:38.787 21:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:38.787 21:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65024 00:07:39.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.355 Nvme0n1 : 3.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:39.355 =================================================================================================================== 00:07:39.355 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:39.355 00:07:40.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.291 Nvme0n1 : 4.00 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:07:40.291 =================================================================================================================== 00:07:40.291 Total : 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:07:40.291 00:07:41.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.227 Nvme0n1 : 5.00 
7442.20 29.07 0.00 0.00 0.00 0.00 0.00 00:07:41.227 =================================================================================================================== 00:07:41.227 Total : 7442.20 29.07 0.00 0.00 0.00 0.00 0.00 00:07:41.227 00:07:42.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.163 Nvme0n1 : 6.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:07:42.163 =================================================================================================================== 00:07:42.163 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:07:42.163 00:07:43.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.541 Nvme0n1 : 7.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:43.541 =================================================================================================================== 00:07:43.541 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:43.541 00:07:44.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.478 Nvme0n1 : 8.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:44.478 =================================================================================================================== 00:07:44.478 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:44.478 00:07:45.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.463 Nvme0n1 : 9.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:45.463 =================================================================================================================== 00:07:45.463 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:45.463 00:07:46.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.398 Nvme0n1 : 10.00 7505.70 29.32 0.00 0.00 0.00 0.00 0.00 00:07:46.398 =================================================================================================================== 00:07:46.398 Total : 7505.70 29.32 0.00 0.00 0.00 0.00 0.00 00:07:46.398 00:07:46.398 00:07:46.398 Latency(us) 00:07:46.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.398 Nvme0n1 : 10.00 7515.09 29.36 0.00 0.00 17027.95 12988.04 38606.66 00:07:46.398 =================================================================================================================== 00:07:46.398 Total : 7515.09 29.36 0.00 0.00 17027.95 12988.04 38606.66 00:07:46.398 0 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65002 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 65002 ']' 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 65002 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65002 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:46.398 killing process with pid 65002 00:07:46.398 21:29:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65002' 00:07:46.398 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.398 00:07:46.398 Latency(us) 00:07:46.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.398 =================================================================================================================== 00:07:46.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 65002 00:07:46.398 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 65002 00:07:46.657 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.915 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.173 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:47.173 21:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:47.431 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:47.431 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:47.431 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.690 [2024-07-24 21:29:32.494167] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.690 21:29:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:47.690 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:47.948 request: 00:07:47.948 { 00:07:47.948 "uuid": "f075d4b3-708d-4ccd-b3d9-6d82fa7fd699", 00:07:47.948 "method": "bdev_lvol_get_lvstores", 00:07:47.948 "req_id": 1 00:07:47.948 } 00:07:47.948 Got JSON-RPC error response 00:07:47.948 response: 00:07:47.948 { 00:07:47.948 "code": -19, 00:07:47.948 "message": "No such device" 00:07:47.948 } 00:07:47.948 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:47.948 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.948 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.948 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.948 21:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.206 aio_bdev 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 677f0c4a-16f2-4469-a96c-495ee679da70 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=677f0c4a-16f2-4469-a96c-495ee679da70 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.206 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:48.465 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 677f0c4a-16f2-4469-a96c-495ee679da70 -t 2000 00:07:48.726 [ 00:07:48.726 { 00:07:48.726 "name": "677f0c4a-16f2-4469-a96c-495ee679da70", 00:07:48.726 "aliases": [ 00:07:48.726 "lvs/lvol" 00:07:48.726 ], 00:07:48.726 "product_name": "Logical Volume", 00:07:48.726 "block_size": 4096, 00:07:48.726 "num_blocks": 38912, 00:07:48.726 "uuid": "677f0c4a-16f2-4469-a96c-495ee679da70", 00:07:48.726 
"assigned_rate_limits": { 00:07:48.726 "rw_ios_per_sec": 0, 00:07:48.726 "rw_mbytes_per_sec": 0, 00:07:48.726 "r_mbytes_per_sec": 0, 00:07:48.726 "w_mbytes_per_sec": 0 00:07:48.726 }, 00:07:48.726 "claimed": false, 00:07:48.726 "zoned": false, 00:07:48.726 "supported_io_types": { 00:07:48.726 "read": true, 00:07:48.726 "write": true, 00:07:48.726 "unmap": true, 00:07:48.726 "flush": false, 00:07:48.726 "reset": true, 00:07:48.726 "nvme_admin": false, 00:07:48.726 "nvme_io": false, 00:07:48.726 "nvme_io_md": false, 00:07:48.726 "write_zeroes": true, 00:07:48.726 "zcopy": false, 00:07:48.726 "get_zone_info": false, 00:07:48.726 "zone_management": false, 00:07:48.726 "zone_append": false, 00:07:48.726 "compare": false, 00:07:48.726 "compare_and_write": false, 00:07:48.726 "abort": false, 00:07:48.726 "seek_hole": true, 00:07:48.726 "seek_data": true, 00:07:48.726 "copy": false, 00:07:48.726 "nvme_iov_md": false 00:07:48.726 }, 00:07:48.726 "driver_specific": { 00:07:48.726 "lvol": { 00:07:48.726 "lvol_store_uuid": "f075d4b3-708d-4ccd-b3d9-6d82fa7fd699", 00:07:48.726 "base_bdev": "aio_bdev", 00:07:48.726 "thin_provision": false, 00:07:48.726 "num_allocated_clusters": 38, 00:07:48.726 "snapshot": false, 00:07:48.726 "clone": false, 00:07:48.726 "esnap_clone": false 00:07:48.726 } 00:07:48.726 } 00:07:48.726 } 00:07:48.726 ] 00:07:48.726 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:48.726 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:48.726 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:48.983 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:48.983 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:48.983 21:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:49.242 21:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:49.242 21:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 677f0c4a-16f2-4469-a96c-495ee679da70 00:07:49.499 21:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f075d4b3-708d-4ccd-b3d9-6d82fa7fd699 00:07:49.757 21:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.015 21:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.273 ************************************ 00:07:50.273 END TEST lvs_grow_clean 00:07:50.273 ************************************ 00:07:50.273 00:07:50.273 real 0m18.166s 00:07:50.273 user 0m16.870s 00:07:50.273 sys 0m2.730s 00:07:50.273 21:29:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.273 ************************************ 00:07:50.273 START TEST lvs_grow_dirty 00:07:50.273 ************************************ 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.273 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.840 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:50.840 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:50.840 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:07:50.840 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:07:50.840 21:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.098 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.098 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # 
(( data_clusters == 49 )) 00:07:51.098 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u accbc72d-e15e-42b1-b10c-0bfee0e037cc lvol 150 00:07:51.357 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=047e6f80-aab2-4c38-ae21-86ef40f85596 00:07:51.357 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.357 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:51.616 [2024-07-24 21:29:36.515514] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:51.616 [2024-07-24 21:29:36.515604] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:51.616 true 00:07:51.616 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:07:51.616 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:51.875 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.875 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.134 21:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 047e6f80-aab2-4c38-ae21-86ef40f85596 00:07:52.392 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.392 [2024-07-24 21:29:37.340015] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.392 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
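
Both variants export the lvol over NVMe/TCP in the same way before the load generator starts. A minimal sketch of that export, using only RPCs that appear in the trace ($lvol is the lvol UUID captured just above, 047e6f80-... in this run; the TCP transport was created once, near the top of the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192          # done once per target
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

On the initiator side the namespace shows up as Nvme0n1 with 38912 blocks of 4096 bytes: the 150M lvol is rounded up to 38 whole 4 MiB clusters, matching the num_allocated_clusters value reported for the lvol later in the log.
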
00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65270 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65270 /var/tmp/bdevperf.sock 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65270 ']' 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.651 21:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.910 [2024-07-24 21:29:37.677404] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:07:52.911 [2024-07-24 21:29:37.677549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65270 ] 00:07:52.911 [2024-07-24 21:29:37.813542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.170 [2024-07-24 21:29:37.946200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.170 [2024-07-24 21:29:38.019602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.738 21:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.738 21:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:53.738 21:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:53.997 Nvme0n1 00:07:53.997 21:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.256 [ 00:07:54.256 { 00:07:54.256 "name": "Nvme0n1", 00:07:54.256 "aliases": [ 00:07:54.256 "047e6f80-aab2-4c38-ae21-86ef40f85596" 00:07:54.256 ], 00:07:54.256 "product_name": "NVMe disk", 00:07:54.256 "block_size": 4096, 00:07:54.256 "num_blocks": 38912, 00:07:54.256 "uuid": "047e6f80-aab2-4c38-ae21-86ef40f85596", 00:07:54.256 "assigned_rate_limits": { 00:07:54.256 "rw_ios_per_sec": 0, 00:07:54.256 
"rw_mbytes_per_sec": 0, 00:07:54.256 "r_mbytes_per_sec": 0, 00:07:54.256 "w_mbytes_per_sec": 0 00:07:54.256 }, 00:07:54.256 "claimed": false, 00:07:54.256 "zoned": false, 00:07:54.256 "supported_io_types": { 00:07:54.256 "read": true, 00:07:54.256 "write": true, 00:07:54.256 "unmap": true, 00:07:54.256 "flush": true, 00:07:54.256 "reset": true, 00:07:54.256 "nvme_admin": true, 00:07:54.256 "nvme_io": true, 00:07:54.256 "nvme_io_md": false, 00:07:54.256 "write_zeroes": true, 00:07:54.256 "zcopy": false, 00:07:54.256 "get_zone_info": false, 00:07:54.256 "zone_management": false, 00:07:54.256 "zone_append": false, 00:07:54.256 "compare": true, 00:07:54.256 "compare_and_write": true, 00:07:54.256 "abort": true, 00:07:54.256 "seek_hole": false, 00:07:54.256 "seek_data": false, 00:07:54.256 "copy": true, 00:07:54.256 "nvme_iov_md": false 00:07:54.256 }, 00:07:54.256 "memory_domains": [ 00:07:54.256 { 00:07:54.256 "dma_device_id": "system", 00:07:54.256 "dma_device_type": 1 00:07:54.256 } 00:07:54.256 ], 00:07:54.256 "driver_specific": { 00:07:54.256 "nvme": [ 00:07:54.256 { 00:07:54.256 "trid": { 00:07:54.256 "trtype": "TCP", 00:07:54.256 "adrfam": "IPv4", 00:07:54.256 "traddr": "10.0.0.2", 00:07:54.256 "trsvcid": "4420", 00:07:54.256 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.257 }, 00:07:54.257 "ctrlr_data": { 00:07:54.257 "cntlid": 1, 00:07:54.257 "vendor_id": "0x8086", 00:07:54.257 "model_number": "SPDK bdev Controller", 00:07:54.257 "serial_number": "SPDK0", 00:07:54.257 "firmware_revision": "24.09", 00:07:54.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.257 "oacs": { 00:07:54.257 "security": 0, 00:07:54.257 "format": 0, 00:07:54.257 "firmware": 0, 00:07:54.257 "ns_manage": 0 00:07:54.257 }, 00:07:54.257 "multi_ctrlr": true, 00:07:54.257 "ana_reporting": false 00:07:54.257 }, 00:07:54.257 "vs": { 00:07:54.257 "nvme_version": "1.3" 00:07:54.257 }, 00:07:54.257 "ns_data": { 00:07:54.257 "id": 1, 00:07:54.257 "can_share": true 00:07:54.257 } 00:07:54.257 } 00:07:54.257 ], 00:07:54.257 "mp_policy": "active_passive" 00:07:54.257 } 00:07:54.257 } 00:07:54.257 ] 00:07:54.257 21:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65294 00:07:54.257 21:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.257 21:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.516 Running I/O for 10 seconds... 
00:07:55.454 Latency(us) 00:07:55.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.454 Nvme0n1 : 1.00 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:07:55.454 =================================================================================================================== 00:07:55.454 Total : 8255.00 32.25 0.00 0.00 0.00 0.00 0.00 00:07:55.454 00:07:56.392 21:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:07:56.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.392 Nvme0n1 : 2.00 8128.00 31.75 0.00 0.00 0.00 0.00 0.00 00:07:56.392 =================================================================================================================== 00:07:56.392 Total : 8128.00 31.75 0.00 0.00 0.00 0.00 0.00 00:07:56.392 00:07:56.652 true 00:07:56.652 21:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:07:56.652 21:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.911 21:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:56.911 21:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:56.911 21:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65294 00:07:57.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.478 Nvme0n1 : 3.00 8085.67 31.58 0.00 0.00 0.00 0.00 0.00 00:07:57.478 =================================================================================================================== 00:07:57.479 Total : 8085.67 31.58 0.00 0.00 0.00 0.00 0.00 00:07:57.479 00:07:58.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.415 Nvme0n1 : 4.00 8032.75 31.38 0.00 0.00 0.00 0.00 0.00 00:07:58.415 =================================================================================================================== 00:07:58.415 Total : 8032.75 31.38 0.00 0.00 0.00 0.00 0.00 00:07:58.415 00:07:59.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.352 Nvme0n1 : 5.00 8001.00 31.25 0.00 0.00 0.00 0.00 0.00 00:07:59.352 =================================================================================================================== 00:07:59.352 Total : 8001.00 31.25 0.00 0.00 0.00 0.00 0.00 00:07:59.352 00:08:00.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.731 Nvme0n1 : 6.00 7958.67 31.09 0.00 0.00 0.00 0.00 0.00 00:08:00.731 =================================================================================================================== 00:08:00.731 Total : 7958.67 31.09 0.00 0.00 0.00 0.00 0.00 00:08:00.731 00:08:01.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.668 Nvme0n1 : 7.00 7699.00 30.07 0.00 0.00 0.00 0.00 0.00 00:08:01.668 =================================================================================================================== 00:08:01.668 
Total : 7699.00 30.07 0.00 0.00 0.00 0.00 0.00 00:08:01.668 00:08:02.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.605 Nvme0n1 : 8.00 7609.75 29.73 0.00 0.00 0.00 0.00 0.00 00:08:02.605 =================================================================================================================== 00:08:02.605 Total : 7609.75 29.73 0.00 0.00 0.00 0.00 0.00 00:08:02.605 00:08:03.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.543 Nvme0n1 : 9.00 7554.44 29.51 0.00 0.00 0.00 0.00 0.00 00:08:03.543 =================================================================================================================== 00:08:03.543 Total : 7554.44 29.51 0.00 0.00 0.00 0.00 0.00 00:08:03.543 00:08:04.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.486 Nvme0n1 : 10.00 7522.90 29.39 0.00 0.00 0.00 0.00 0.00 00:08:04.486 =================================================================================================================== 00:08:04.486 Total : 7522.90 29.39 0.00 0.00 0.00 0.00 0.00 00:08:04.486 00:08:04.486 00:08:04.486 Latency(us) 00:08:04.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.486 Nvme0n1 : 10.02 7522.83 29.39 0.00 0.00 17010.61 9413.35 203995.69 00:08:04.486 =================================================================================================================== 00:08:04.486 Total : 7522.83 29.39 0.00 0.00 17010.61 9413.35 203995.69 00:08:04.486 0 00:08:04.486 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65270 00:08:04.486 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65270 ']' 00:08:04.486 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65270 00:08:04.486 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:04.486 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.486 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65270 00:08:04.486 killing process with pid 65270 00:08:04.486 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.486 00:08:04.486 Latency(us) 00:08:04.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.487 =================================================================================================================== 00:08:04.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.487 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:04.487 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:04.487 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65270' 00:08:04.487 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65270 00:08:04.487 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65270 
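
Both 10-second runs above follow the same pattern: a separate bdevperf process is started with -z so it waits on its own RPC socket, attached to the exported namespace over TCP, and then told to run a 4 KiB random-write workload at queue depth 128; about two seconds into the run the script grows the lvstore into the space added by the earlier truncate, and once the run has finished it checks the cluster accounting. The expected numbers are the ones asserted by the (( data_clusters == 99 )) and (( free_clusters == 61 )) checks: 99 data clusters after the grow, minus the lvol's 38 allocated clusters, leaves 61 free. In outline, using only commands that appear in the trace ($lvs is the lvstore UUID):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # load generator on core mask 0x2; -z makes it wait until perform_tests is called
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # shows up as Nvme0n1
  $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000        # wait up to 3 s for it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &                           # kick off the 10 s run

  sleep 2
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                                   # grow while I/O is in flight
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # 99 after the grow

  # once the run is over and bdevperf has been stopped:
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # 99 - 38 = 61
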
00:08:04.767 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.039 21:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.297 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:05.297 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64918 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64918 00:08:05.557 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64918 Killed "${NVMF_APP[@]}" "$@" 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65432 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65432 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65432 ']' 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
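
This is the part that makes the dirty variant dirty: the original nvmf_tgt (pid 64918) has just been killed with SIGKILL while the lvstore was still open, and a fresh target (pid 65432) is being started in its place. When the backing file is re-attached below, the blobstore notices it was not cleanly unloaded and replays its metadata ("Performing recovery on blobstore", "Recover: blob 0x0"/"0x1"), after which the lvol and the grown lvstore must still be intact. In outline ($nvmfpid, $aio_file, $lvs and $lvol stand for the pid, backing file and UUIDs visible in this log):

  kill -9 "$nvmfpid"                                    # SIGKILL the target; the lvstore is left dirty
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096        # re-attach the file; blobstore recovery runs here
  $rpc bdev_wait_for_examine                            # let the lvol layer re-open the lvstore
  $rpc bdev_get_bdevs -b "$lvol" -t 2000                # the lvol is found again by UUID
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # still 61
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # still 99: the grow survived
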
00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.557 21:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.816 [2024-07-24 21:29:50.606459] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:05.816 [2024-07-24 21:29:50.606770] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.816 [2024-07-24 21:29:50.749054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.075 [2024-07-24 21:29:50.834531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.075 [2024-07-24 21:29:50.834590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.075 [2024-07-24 21:29:50.834601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.075 [2024-07-24 21:29:50.834609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.075 [2024-07-24 21:29:50.834615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.075 [2024-07-24 21:29:50.834685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.075 [2024-07-24 21:29:50.903658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.643 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.903 [2024-07-24 21:29:51.849871] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:06.903 [2024-07-24 21:29:51.850351] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:06.903 [2024-07-24 21:29:51.850718] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 047e6f80-aab2-4c38-ae21-86ef40f85596 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=047e6f80-aab2-4c38-ae21-86ef40f85596 00:08:06.903 21:29:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.903 21:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:07.162 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 047e6f80-aab2-4c38-ae21-86ef40f85596 -t 2000 00:08:07.422 [ 00:08:07.422 { 00:08:07.422 "name": "047e6f80-aab2-4c38-ae21-86ef40f85596", 00:08:07.422 "aliases": [ 00:08:07.422 "lvs/lvol" 00:08:07.422 ], 00:08:07.422 "product_name": "Logical Volume", 00:08:07.422 "block_size": 4096, 00:08:07.422 "num_blocks": 38912, 00:08:07.422 "uuid": "047e6f80-aab2-4c38-ae21-86ef40f85596", 00:08:07.422 "assigned_rate_limits": { 00:08:07.422 "rw_ios_per_sec": 0, 00:08:07.422 "rw_mbytes_per_sec": 0, 00:08:07.422 "r_mbytes_per_sec": 0, 00:08:07.422 "w_mbytes_per_sec": 0 00:08:07.422 }, 00:08:07.422 "claimed": false, 00:08:07.422 "zoned": false, 00:08:07.422 "supported_io_types": { 00:08:07.422 "read": true, 00:08:07.422 "write": true, 00:08:07.422 "unmap": true, 00:08:07.422 "flush": false, 00:08:07.422 "reset": true, 00:08:07.422 "nvme_admin": false, 00:08:07.422 "nvme_io": false, 00:08:07.422 "nvme_io_md": false, 00:08:07.422 "write_zeroes": true, 00:08:07.422 "zcopy": false, 00:08:07.422 "get_zone_info": false, 00:08:07.422 "zone_management": false, 00:08:07.422 "zone_append": false, 00:08:07.422 "compare": false, 00:08:07.422 "compare_and_write": false, 00:08:07.422 "abort": false, 00:08:07.422 "seek_hole": true, 00:08:07.422 "seek_data": true, 00:08:07.422 "copy": false, 00:08:07.422 "nvme_iov_md": false 00:08:07.422 }, 00:08:07.422 "driver_specific": { 00:08:07.422 "lvol": { 00:08:07.422 "lvol_store_uuid": "accbc72d-e15e-42b1-b10c-0bfee0e037cc", 00:08:07.422 "base_bdev": "aio_bdev", 00:08:07.422 "thin_provision": false, 00:08:07.422 "num_allocated_clusters": 38, 00:08:07.422 "snapshot": false, 00:08:07.422 "clone": false, 00:08:07.422 "esnap_clone": false 00:08:07.422 } 00:08:07.422 } 00:08:07.422 } 00:08:07.422 ] 00:08:07.422 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:07.422 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:07.422 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:07.681 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:07.681 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:07.681 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:07.940 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.940 21:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.200 [2024-07-24 21:29:53.059459] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:08.200 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:08.459 request: 00:08:08.459 { 00:08:08.459 "uuid": "accbc72d-e15e-42b1-b10c-0bfee0e037cc", 00:08:08.459 "method": "bdev_lvol_get_lvstores", 00:08:08.459 "req_id": 1 00:08:08.459 } 00:08:08.459 Got JSON-RPC error response 00:08:08.459 response: 00:08:08.459 { 00:08:08.459 "code": -19, 00:08:08.459 "message": "No such device" 00:08:08.459 } 00:08:08.459 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:08.459 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.459 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.459 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.459 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.718 aio_bdev 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 047e6f80-aab2-4c38-ae21-86ef40f85596 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=047e6f80-aab2-4c38-ae21-86ef40f85596 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.718 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.977 21:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 047e6f80-aab2-4c38-ae21-86ef40f85596 -t 2000 00:08:09.236 [ 00:08:09.236 { 00:08:09.236 "name": "047e6f80-aab2-4c38-ae21-86ef40f85596", 00:08:09.236 "aliases": [ 00:08:09.236 "lvs/lvol" 00:08:09.236 ], 00:08:09.236 "product_name": "Logical Volume", 00:08:09.236 "block_size": 4096, 00:08:09.236 "num_blocks": 38912, 00:08:09.236 "uuid": "047e6f80-aab2-4c38-ae21-86ef40f85596", 00:08:09.236 "assigned_rate_limits": { 00:08:09.236 "rw_ios_per_sec": 0, 00:08:09.236 "rw_mbytes_per_sec": 0, 00:08:09.236 "r_mbytes_per_sec": 0, 00:08:09.236 "w_mbytes_per_sec": 0 00:08:09.236 }, 00:08:09.236 "claimed": false, 00:08:09.236 "zoned": false, 00:08:09.236 "supported_io_types": { 00:08:09.236 "read": true, 00:08:09.236 "write": true, 00:08:09.236 "unmap": true, 00:08:09.236 "flush": false, 00:08:09.236 "reset": true, 00:08:09.236 "nvme_admin": false, 00:08:09.236 "nvme_io": false, 00:08:09.236 "nvme_io_md": false, 00:08:09.236 "write_zeroes": true, 00:08:09.236 "zcopy": false, 00:08:09.236 "get_zone_info": false, 00:08:09.236 "zone_management": false, 00:08:09.236 "zone_append": false, 00:08:09.236 "compare": false, 00:08:09.236 "compare_and_write": false, 00:08:09.236 "abort": false, 00:08:09.236 "seek_hole": true, 00:08:09.236 "seek_data": true, 00:08:09.236 "copy": false, 00:08:09.236 "nvme_iov_md": false 00:08:09.236 }, 00:08:09.236 "driver_specific": { 00:08:09.236 "lvol": { 00:08:09.236 "lvol_store_uuid": "accbc72d-e15e-42b1-b10c-0bfee0e037cc", 00:08:09.236 "base_bdev": "aio_bdev", 00:08:09.236 "thin_provision": false, 00:08:09.236 "num_allocated_clusters": 38, 00:08:09.236 "snapshot": false, 00:08:09.236 "clone": false, 00:08:09.236 "esnap_clone": false 00:08:09.236 } 00:08:09.236 } 00:08:09.236 } 00:08:09.236 ] 00:08:09.236 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:09.236 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:09.236 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:08:09.496 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:09.496 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:09.496 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:09.496 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:09.496 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 047e6f80-aab2-4c38-ae21-86ef40f85596 00:08:09.755 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u accbc72d-e15e-42b1-b10c-0bfee0e037cc 00:08:10.014 21:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.273 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:10.841 ************************************ 00:08:10.841 END TEST lvs_grow_dirty 00:08:10.841 ************************************ 00:08:10.841 00:08:10.841 real 0m20.361s 00:08:10.841 user 0m42.097s 00:08:10.841 sys 0m8.977s 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:10.841 nvmf_trace.0 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.841 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:11.100 21:29:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.100 rmmod nvme_tcp 00:08:11.100 rmmod nvme_fabrics 00:08:11.100 rmmod nvme_keyring 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:11.100 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65432 ']' 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65432 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65432 ']' 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65432 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65432 00:08:11.101 killing process with pid 65432 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65432' 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65432 00:08:11.101 21:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65432 00:08:11.359 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:11.360 ************************************ 00:08:11.360 END TEST nvmf_lvs_grow 00:08:11.360 ************************************ 00:08:11.360 00:08:11.360 real 0m41.101s 00:08:11.360 user 1m5.162s 00:08:11.360 sys 0m12.506s 
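The trace above is the heart of lvs_grow_dirty: the backing aio bdev is deleted out from under a live lvstore, the lvstore lookup is then required to fail with -19 ("No such device"), and recreating the aio file brings the same lvol back once examine completes. A condensed sketch of that RPC sequence, using the paths and UUIDs from this run (the shell variables and inline assertions are shorthand for the NOT/jq helpers in the script, not the script itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs=accbc72d-e15e-42b1-b10c-0bfee0e037cc
  lvol=047e6f80-aab2-4c38-ae21-86ef40f85596
  $rpc bdev_aio_delete aio_bdev                        # remove the base bdev under the live lvstore
  $rpc bdev_lvol_get_lvstores -u "$lvs" && exit 1      # must now fail: code -19, "No such device"
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_wait_for_examine                           # lvstore is rediscovered on the new aio bdev
  $rpc bdev_get_bdevs -b "$lvol" -t 2000               # the same lvol UUID reappears
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61 in this run
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99 in this run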
00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.360 ************************************ 00:08:11.360 START TEST nvmf_bdev_io_wait 00:08:11.360 ************************************ 00:08:11.360 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.619 * Looking for test storage... 00:08:11.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.619 
21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:11.619 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:11.620 Cannot find device "nvmf_tgt_br" 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.620 Cannot find device "nvmf_tgt_br2" 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:11.620 Cannot find device "nvmf_tgt_br" 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:11.620 Cannot find device "nvmf_tgt_br2" 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.620 21:29:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.620 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:11.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:11.879 00:08:11.879 --- 10.0.0.2 ping statistics --- 00:08:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.879 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:11.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:11.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:11.879 00:08:11.879 --- 10.0.0.3 ping statistics --- 00:08:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.879 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:11.879 00:08:11.879 --- 10.0.0.1 ping statistics --- 00:08:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.879 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65743 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65743 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65743 ']' 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
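The ping checks above conclude nvmf_veth_init: with NET_TYPE=virt the target side of three veth pairs is moved into the nvmf_tgt_ns_spdk namespace, the host-side peers are enslaved to a single bridge, and 10.0.0.1 (initiator) must reach 10.0.0.2/10.0.0.3 (target) and back before the target is started. Stripped of the link-up and cleanup steps, the topology amounts to roughly this (interface names and addresses exactly as in the trace; the loop is shorthand):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side,    10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side,    10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host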
00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.879 21:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:11.879 [2024-07-24 21:29:56.823772] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:11.879 [2024-07-24 21:29:56.823860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.138 [2024-07-24 21:29:56.961712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.138 [2024-07-24 21:29:57.062187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.138 [2024-07-24 21:29:57.062457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.138 [2024-07-24 21:29:57.062601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.138 [2024-07-24 21:29:57.062735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.138 [2024-07-24 21:29:57.062789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.138 [2024-07-24 21:29:57.063051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.138 [2024-07-24 21:29:57.063200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.138 [2024-07-24 21:29:57.063290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.138 [2024-07-24 21:29:57.063291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.092 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 [2024-07-24 21:29:57.950940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 [2024-07-24 21:29:57.967912] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 Malloc0 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.093 [2024-07-24 21:29:58.034582] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65778 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65780 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 
--json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65782 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:13.093 { 00:08:13.093 "params": { 00:08:13.093 "name": "Nvme$subsystem", 00:08:13.093 "trtype": "$TEST_TRANSPORT", 00:08:13.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.093 "adrfam": "ipv4", 00:08:13.093 "trsvcid": "$NVMF_PORT", 00:08:13.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.093 "hdgst": ${hdgst:-false}, 00:08:13.093 "ddgst": ${ddgst:-false} 00:08:13.093 }, 00:08:13.093 "method": "bdev_nvme_attach_controller" 00:08:13.093 } 00:08:13.093 EOF 00:08:13.093 )") 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65784 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:13.093 { 00:08:13.093 "params": { 00:08:13.093 "name": "Nvme$subsystem", 00:08:13.093 "trtype": "$TEST_TRANSPORT", 00:08:13.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.093 "adrfam": "ipv4", 00:08:13.093 "trsvcid": "$NVMF_PORT", 00:08:13.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.093 "hdgst": ${hdgst:-false}, 00:08:13.093 "ddgst": ${ddgst:-false} 00:08:13.093 }, 00:08:13.093 "method": "bdev_nvme_attach_controller" 00:08:13.093 } 00:08:13.093 EOF 00:08:13.093 )") 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:13.093 { 00:08:13.093 "params": { 00:08:13.093 "name": "Nvme$subsystem", 00:08:13.093 "trtype": "$TEST_TRANSPORT", 00:08:13.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.093 "adrfam": "ipv4", 00:08:13.093 "trsvcid": "$NVMF_PORT", 00:08:13.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.093 "hdgst": ${hdgst:-false}, 00:08:13.093 "ddgst": ${ddgst:-false} 00:08:13.093 }, 00:08:13.093 "method": "bdev_nvme_attach_controller" 00:08:13.093 } 00:08:13.093 EOF 00:08:13.093 )") 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:13.093 "params": { 00:08:13.093 "name": "Nvme1", 00:08:13.093 "trtype": "tcp", 00:08:13.093 "traddr": "10.0.0.2", 00:08:13.093 "adrfam": "ipv4", 00:08:13.093 "trsvcid": "4420", 00:08:13.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.093 "hdgst": false, 00:08:13.093 "ddgst": false 00:08:13.093 }, 00:08:13.093 "method": "bdev_nvme_attach_controller" 00:08:13.093 }' 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:13.093 "params": { 00:08:13.093 "name": "Nvme1", 00:08:13.093 "trtype": "tcp", 00:08:13.093 "traddr": "10.0.0.2", 00:08:13.093 "adrfam": "ipv4", 00:08:13.093 "trsvcid": "4420", 00:08:13.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.093 "hdgst": false, 00:08:13.093 "ddgst": false 00:08:13.093 }, 00:08:13.093 "method": "bdev_nvme_attach_controller" 00:08:13.093 }' 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
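The block above launches four bdevperf instances against the same subsystem, one per workload, each fed an NVMe-oF bdev config over /dev/fd/63 by gen_nvmf_target_json. The per-controller entry is the one printed in the trace; only the core mask, shm id and workload differ between instances, and their masks (0x10 through 0x80, i.e. cores 4-7) deliberately stay clear of the target's 0xF (cores 0-3). Condensed from the command lines above (in the script each instance runs in the background with the JSON supplied via process substitution, which is omitted here):

  # Per-controller config entry generated for every instance:
  #   { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
  #                 "adrfam": "ipv4", "trsvcid": "4420",
  #                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host1",
  #                 "hdgst": false, "ddgst": false },
  #     "method": "bdev_nvme_attach_controller" }
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256   # core 4
  $bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256   # core 5
  $bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256   # core 6
  $bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256   # core 7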
00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:13.093 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:13.093 { 00:08:13.093 "params": { 00:08:13.093 "name": "Nvme$subsystem", 00:08:13.093 "trtype": "$TEST_TRANSPORT", 00:08:13.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.093 "adrfam": "ipv4", 00:08:13.093 "trsvcid": "$NVMF_PORT", 00:08:13.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.094 "hdgst": ${hdgst:-false}, 00:08:13.094 "ddgst": ${ddgst:-false} 00:08:13.094 }, 00:08:13.094 "method": "bdev_nvme_attach_controller" 00:08:13.094 } 00:08:13.094 EOF 00:08:13.094 )") 00:08:13.094 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:13.094 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:13.094 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:13.094 "params": { 00:08:13.094 "name": "Nvme1", 00:08:13.094 "trtype": "tcp", 00:08:13.094 "traddr": "10.0.0.2", 00:08:13.094 "adrfam": "ipv4", 00:08:13.094 "trsvcid": "4420", 00:08:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.094 "hdgst": false, 00:08:13.094 "ddgst": false 00:08:13.094 }, 00:08:13.094 "method": "bdev_nvme_attach_controller" 00:08:13.094 }' 00:08:13.094 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:13.377 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:13.377 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:13.377 "params": { 00:08:13.377 "name": "Nvme1", 00:08:13.377 "trtype": "tcp", 00:08:13.377 "traddr": "10.0.0.2", 00:08:13.377 "adrfam": "ipv4", 00:08:13.377 "trsvcid": "4420", 00:08:13.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.377 "hdgst": false, 00:08:13.377 "ddgst": false 00:08:13.377 }, 00:08:13.377 "method": "bdev_nvme_attach_controller" 00:08:13.377 }' 00:08:13.377 [2024-07-24 21:29:58.099872] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:13.377 [2024-07-24 21:29:58.100812] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:13.377 [2024-07-24 21:29:58.102949] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:08:13.377 21:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65778 00:08:13.377 [2024-07-24 21:29:58.103220] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:13.377 [2024-07-24 21:29:58.126735] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:13.377 [2024-07-24 21:29:58.127000] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:13.377 [2024-07-24 21:29:58.129841] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:13.377 [2024-07-24 21:29:58.130102] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:13.377 [2024-07-24 21:29:58.339498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.635 [2024-07-24 21:29:58.410946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.635 [2024-07-24 21:29:58.477190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.635 [2024-07-24 21:29:58.521375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.635 [2024-07-24 21:29:58.529195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:13.635 [2024-07-24 21:29:58.561102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.635 [2024-07-24 21:29:58.607877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.635 [2024-07-24 21:29:58.625542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.893 [2024-07-24 21:29:58.636744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:13.893 Running I/O for 1 seconds... 00:08:13.893 [2024-07-24 21:29:58.696504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.893 Running I/O for 1 seconds... 00:08:13.893 [2024-07-24 21:29:58.752912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:13.893 Running I/O for 1 seconds... 00:08:13.893 [2024-07-24 21:29:58.814950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.152 Running I/O for 1 seconds... 
00:08:14.719 00:08:14.719 Latency(us) 00:08:14.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.719 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:14.719 Nvme1n1 : 1.00 191433.94 747.79 0.00 0.00 666.24 322.09 889.95 00:08:14.719 =================================================================================================================== 00:08:14.719 Total : 191433.94 747.79 0.00 0.00 666.24 322.09 889.95 00:08:14.719 00:08:14.719 Latency(us) 00:08:14.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.719 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:14.719 Nvme1n1 : 1.03 5484.65 21.42 0.00 0.00 22912.78 8221.79 49092.42 00:08:14.719 =================================================================================================================== 00:08:14.719 Total : 5484.65 21.42 0.00 0.00 22912.78 8221.79 49092.42 00:08:14.978 00:08:14.978 Latency(us) 00:08:14.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.978 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:14.978 Nvme1n1 : 1.01 6252.87 24.43 0.00 0.00 20320.39 9651.67 31218.97 00:08:14.978 =================================================================================================================== 00:08:14.979 Total : 6252.87 24.43 0.00 0.00 20320.39 9651.67 31218.97 00:08:14.979 00:08:14.979 Latency(us) 00:08:14.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.979 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:14.979 Nvme1n1 : 1.01 5737.94 22.41 0.00 0.00 22237.85 5481.19 59578.18 00:08:14.979 =================================================================================================================== 00:08:14.979 Total : 5737.94 22.41 0.00 0.00 22237.85 5481.19 59578.18 00:08:14.979 21:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65780 00:08:15.238 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65782 00:08:15.238 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65784 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
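As a quick cross-check of the result tables above: every job uses a 4096-byte I/O size, so the MiB/s column is simply IOPS * 4096 / 2^20. For the flush and write rows (the awk one-liners are illustrative arithmetic only, not part of the test):

  awk 'BEGIN { printf "%.2f\n", 191433.94 * 4096 / 1048576 }'   # 747.79 MiB/s (flush row)
  awk 'BEGIN { printf "%.2f\n",   5484.65 * 4096 / 1048576 }'   # 21.42 MiB/s  (write row)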
00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.497 rmmod nvme_tcp 00:08:15.497 rmmod nvme_fabrics 00:08:15.497 rmmod nvme_keyring 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65743 ']' 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65743 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65743 ']' 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65743 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65743 00:08:15.497 killing process with pid 65743 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65743' 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65743 00:08:15.497 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65743 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:15.756 00:08:15.756 real 0m4.455s 00:08:15.756 user 0m19.771s 00:08:15.756 sys 0m2.318s 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.756 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 ************************************ 00:08:15.756 END TEST nvmf_bdev_io_wait 
00:08:15.756 ************************************ 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.016 ************************************ 00:08:16.016 START TEST nvmf_queue_depth 00:08:16.016 ************************************ 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:16.016 * Looking for test storage... 00:08:16.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.016 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:16.017 Cannot find device "nvmf_tgt_br" 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.017 Cannot find device "nvmf_tgt_br2" 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:16.017 Cannot find device "nvmf_tgt_br" 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:16.017 Cannot find device "nvmf_tgt_br2" 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:16.017 21:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:16.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.276 21:30:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:16.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:16.276 00:08:16.276 --- 10.0.0.2 ping statistics --- 00:08:16.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.276 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:16.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:16.276 00:08:16.276 --- 10.0.0.3 ping statistics --- 00:08:16.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.276 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:16.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:16.276 00:08:16.276 --- 10.0.0.1 ping statistics --- 00:08:16.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.276 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66018 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66018 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66018 ']' 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.276 21:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.534 [2024-07-24 21:30:01.314242] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
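The nvmf_veth_init block traced above reduces to the following standalone sequence; the commands are the ones shown in the trace (run as root). It leaves the initiator with 10.0.0.1 on the host side, puts the two target interfaces (10.0.0.2 and 10.0.0.3) into the nvmf_tgt_ns_spdk namespace, bridges the host-side peers, opens TCP/4420, and verifies reachability with ping.

  #!/usr/bin/env bash
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # veth pairs: one initiator-side pair, two target-side pairs
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if  up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # bridge the host-side peer interfaces together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic and bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check, as in the trace
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1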
00:08:16.534 [2024-07-24 21:30:01.314336] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.534 [2024-07-24 21:30:01.457189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.791 [2024-07-24 21:30:01.563701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.791 [2024-07-24 21:30:01.563771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.791 [2024-07-24 21:30:01.563782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.791 [2024-07-24 21:30:01.563790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.791 [2024-07-24 21:30:01.563796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.791 [2024-07-24 21:30:01.563824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.791 [2024-07-24 21:30:01.633091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.356 [2024-07-24 21:30:02.285711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.356 Malloc0 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.356 [2024-07-24 21:30:02.350033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.356 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66050 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66050 /var/tmp/bdevperf.sock 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66050 ']' 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.613 21:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.613 [2024-07-24 21:30:02.412128] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
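rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py (default socket /var/tmp/spdk.sock). The target provisioning for this test, together with the bdevperf steps traced immediately after this point, amounts to roughly the following. NQNs, addresses and flags are copied from this log; the explicit rpc.py spelling of the rpc_cmd calls is an assumption.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: TCP transport, a 64 MiB / 512 B malloc bdev, one subsystem with one namespace and one listener
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits in -z mode on its own RPC socket, is handed the remote
  # controller, and is then told to run the queued job (1024 deep, 4 KiB verify, 10 s)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid" && wait "$bdevperf_pid"   # killprocess + wait, as in the trace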
00:08:17.613 [2024-07-24 21:30:02.412220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66050 ] 00:08:17.613 [2024-07-24 21:30:02.551833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.872 [2024-07-24 21:30:02.684652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.872 [2024-07-24 21:30:02.765341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.438 NVMe0n1 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.438 21:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.695 Running I/O for 10 seconds... 00:08:28.665 00:08:28.665 Latency(us) 00:08:28.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.665 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:28.665 Verification LBA range: start 0x0 length 0x4000 00:08:28.665 NVMe0n1 : 10.06 9297.48 36.32 0.00 0.00 109640.14 11736.90 103427.72 00:08:28.665 =================================================================================================================== 00:08:28.665 Total : 9297.48 36.32 0.00 0.00 109640.14 11736.90 103427.72 00:08:28.665 0 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66050 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66050 ']' 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66050 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66050 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.665 killing process with pid 66050 00:08:28.665 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.665 00:08:28.665 Latency(us) 00:08:28.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.665 =================================================================================================================== 00:08:28.665 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66050' 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66050 00:08:28.665 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66050 00:08:28.930 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:28.930 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:28.930 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.930 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.203 rmmod nvme_tcp 00:08:29.203 rmmod nvme_fabrics 00:08:29.203 rmmod nvme_keyring 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:29.203 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:29.204 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66018 ']' 00:08:29.204 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66018 00:08:29.204 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66018 ']' 00:08:29.204 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66018 00:08:29.204 21:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66018 00:08:29.204 killing process with pid 66018 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66018' 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66018 00:08:29.204 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66018 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.464 21:30:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:29.464 00:08:29.464 real 0m13.592s 00:08:29.464 user 0m23.125s 00:08:29.464 sys 0m2.485s 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.464 ************************************ 00:08:29.464 END TEST nvmf_queue_depth 00:08:29.464 ************************************ 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.464 ************************************ 00:08:29.464 START TEST nvmf_target_multipath 00:08:29.464 ************************************ 00:08:29.464 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:29.723 * Looking for test storage... 
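Every suite in this log is driven through the same run_test wrapper, which prints the starred START TEST / END TEST banners and the real/user/sys timing seen above around nvmf_bdev_io_wait and nvmf_queue_depth. A rough approximation of that wrapper is sketched here; the real helper in autotest_common.sh also handles xtrace toggling and return-code bookkeeping, which is omitted.

  run_test() {
      # e.g. run_test nvmf_target_multipath .../test/nvmf/target/multipath.sh --transport=tcp
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }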
00:08:29.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.723 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.724 21:30:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:29.724 Cannot find device "nvmf_tgt_br" 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.724 Cannot find device "nvmf_tgt_br2" 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:29.724 Cannot find device "nvmf_tgt_br" 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:29.724 Cannot find device "nvmf_tgt_br2" 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:29.724 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:29.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:08:29.983 00:08:29.983 --- 10.0.0.2 ping statistics --- 00:08:29.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.983 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:29.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:29.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:29.983 00:08:29.983 --- 10.0.0.3 ping statistics --- 00:08:29.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.983 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:29.983 00:08:29.983 --- 10.0.0.1 ping statistics --- 00:08:29.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.983 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.983 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66376 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66376 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66376 ']' 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
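nvmfappstart above boils down to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers. A small sketch follows; the binary path, flags and socket path are taken from the trace, while the polling loop is an assumed stand-in for the waitforlisten helper.

  NS=nvmf_tgt_ns_spdk
  # -m 0xF gives four reactors (cores 0-3), which the multipath test needs
  ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # approximate waitforlisten: poll until the app's RPC socket accepts requests
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done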
00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.984 21:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:30.243 [2024-07-24 21:30:15.014248] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:30.243 [2024-07-24 21:30:15.014337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.243 [2024-07-24 21:30:15.152202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.502 [2024-07-24 21:30:15.300698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.502 [2024-07-24 21:30:15.300955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.502 [2024-07-24 21:30:15.301114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.502 [2024-07-24 21:30:15.301227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.502 [2024-07-24 21:30:15.301347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.502 [2024-07-24 21:30:15.301664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.502 [2024-07-24 21:30:15.301809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.502 [2024-07-24 21:30:15.302566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.502 [2024-07-24 21:30:15.302522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.502 [2024-07-24 21:30:15.379515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.069 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.069 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:31.069 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.069 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.069 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.069 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.070 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.328 [2024-07-24 21:30:16.313513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.587 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:31.587 Malloc0 00:08:31.846 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:31.846 21:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.105 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.363 [2024-07-24 21:30:17.263051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.363 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:32.622 [2024-07-24 21:30:17.471138] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:32.622 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:32.622 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:32.882 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.882 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:32.882 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.882 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:32.882 21:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:34.786 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66465 00:08:34.787 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:34.787 21:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:35.045 [global] 00:08:35.045 thread=1 00:08:35.045 invalidate=1 00:08:35.045 rw=randrw 00:08:35.045 time_based=1 00:08:35.045 runtime=6 00:08:35.045 ioengine=libaio 00:08:35.045 direct=1 00:08:35.045 bs=4096 00:08:35.045 iodepth=128 00:08:35.045 norandommap=0 00:08:35.045 numjobs=1 00:08:35.045 00:08:35.045 verify_dump=1 00:08:35.045 verify_backlog=512 00:08:35.045 verify_state_save=0 00:08:35.045 do_verify=1 00:08:35.045 verify=crc32c-intel 00:08:35.045 [job0] 00:08:35.045 filename=/dev/nvme0n1 00:08:35.045 Could not set queue depth (nvme0n1) 00:08:35.045 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:35.045 fio-3.35 00:08:35.045 Starting 1 thread 00:08:35.981 21:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:36.240 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:36.498 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:36.499 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:36.757 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:37.015 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.016 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.016 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.016 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:37.016 21:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66465 00:08:41.207 00:08:41.207 job0: (groupid=0, jobs=1): err= 0: pid=66492: Wed Jul 24 21:30:26 2024 00:08:41.207 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(259MiB/6006msec) 00:08:41.207 slat (usec): min=2, max=5854, avg=51.99, stdev=203.08 00:08:41.207 clat (usec): min=1473, max=14777, avg=7783.03, stdev=1287.20 00:08:41.207 lat (usec): min=1488, max=14786, avg=7835.02, stdev=1291.31 00:08:41.207 clat percentiles (usec): 00:08:41.207 | 1.00th=[ 4080], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7046], 00:08:41.207 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7898], 00:08:41.207 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10552], 00:08:41.207 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13042], 99.95th=[13435], 00:08:41.207 | 99.99th=[14353] 00:08:41.207 bw ( KiB/s): min=10032, max=32184, per=52.96%, avg=23397.82, stdev=6250.82, samples=11 00:08:41.207 iops : min= 2508, max= 8046, avg=5849.45, stdev=1562.71, samples=11 00:08:41.207 write: IOPS=6800, BW=26.6MiB/s (27.9MB/s)(139MiB/5238msec); 0 zone resets 00:08:41.207 slat (usec): min=4, max=7197, avg=62.33, stdev=152.38 00:08:41.207 clat (usec): min=2549, max=14616, avg=6895.95, stdev=1186.25 00:08:41.207 lat (usec): min=2576, max=14639, avg=6958.29, stdev=1190.70 00:08:41.207 clat percentiles (usec): 00:08:41.207 | 1.00th=[ 3097], 5.00th=[ 4228], 10.00th=[ 5735], 20.00th=[ 6390], 00:08:41.207 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7177], 00:08:41.207 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8225], 00:08:41.207 | 99.00th=[10421], 99.50th=[11076], 99.90th=[12780], 99.95th=[13829], 00:08:41.207 | 99.99th=[14615] 00:08:41.207 bw ( KiB/s): min=10288, max=31592, per=86.26%, avg=23467.64, stdev=6039.77, samples=11 00:08:41.207 iops : min= 2572, max= 7898, avg=5866.91, stdev=1509.94, samples=11 00:08:41.207 lat (msec) : 2=0.01%, 4=2.00%, 10=93.42%, 20=4.57% 00:08:41.207 cpu : usr=5.58%, sys=22.58%, ctx=5938, majf=0, minf=121 00:08:41.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:41.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:41.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:41.207 issued rwts: total=66336,35623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:41.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:41.207 00:08:41.207 Run status group 0 (all jobs): 00:08:41.207 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=259MiB (272MB), run=6006-6006msec 00:08:41.207 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=139MiB (146MB), run=5238-5238msec 00:08:41.207 00:08:41.207 Disk stats (read/write): 00:08:41.207 nvme0n1: ios=65370/34935, merge=0/0, ticks=486590/224922, in_queue=711512, util=98.66% 00:08:41.207 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:41.467 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66566 00:08:41.726 21:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:41.726 [global] 00:08:41.726 thread=1 00:08:41.726 invalidate=1 00:08:41.726 rw=randrw 00:08:41.726 time_based=1 00:08:41.726 runtime=6 00:08:41.726 ioengine=libaio 00:08:41.726 direct=1 00:08:41.726 bs=4096 00:08:41.726 iodepth=128 00:08:41.726 norandommap=0 00:08:41.726 numjobs=1 00:08:41.726 00:08:41.726 verify_dump=1 00:08:41.726 verify_backlog=512 00:08:41.726 verify_state_save=0 00:08:41.726 do_verify=1 00:08:41.726 verify=crc32c-intel 00:08:41.726 [job0] 00:08:41.726 filename=/dev/nvme0n1 00:08:41.726 Could not set queue depth (nvme0n1) 00:08:41.985 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:41.985 fio-3.35 00:08:41.985 Starting 1 thread 00:08:42.922 21:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:43.181 21:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:43.440 
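The [global]/[job0] dump above is the entire workload: fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v expands into a 6-second random read/write verify job at 4 KiB block size and queue depth 128 against the multipath device, which keeps I/O in flight while the listeners' ANA states are flipped underneath it. A standalone approximation (options copied from the dump; the wrapper itself may set additional defaults):

cat > multipath-job.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randrw
time_based=1
runtime=6
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio multipath-job.fio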
21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:43.441 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:43.700 21:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66566 00:08:48.998 00:08:48.998 job0: (groupid=0, jobs=1): err= 0: pid=66587: Wed Jul 24 21:30:32 2024 00:08:48.999 read: IOPS=12.8k, BW=50.2MiB/s (52.6MB/s)(301MiB/6002msec) 00:08:48.999 slat (usec): min=4, max=7192, avg=40.48, stdev=172.05 00:08:48.999 clat (usec): min=577, max=16229, avg=6942.53, stdev=1691.69 00:08:48.999 lat (usec): min=588, max=16247, avg=6983.00, stdev=1704.48 00:08:48.999 clat percentiles (usec): 00:08:48.999 | 1.00th=[ 2573], 5.00th=[ 3884], 10.00th=[ 4555], 20.00th=[ 5604], 00:08:48.999 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7439], 00:08:48.999 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9634], 00:08:48.999 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13173], 99.95th=[13829], 00:08:48.999 | 99.99th=[15533] 00:08:48.999 bw ( KiB/s): min= 5216, max=44784, per=52.19%, avg=26802.18, stdev=11369.48, samples=11 00:08:48.999 iops : min= 1304, max=11196, avg=6700.55, stdev=2842.37, samples=11 00:08:48.999 write: IOPS=7759, BW=30.3MiB/s (31.8MB/s)(152MiB/5014msec); 0 zone resets 00:08:48.999 slat (usec): min=7, max=2994, avg=49.94, stdev=117.75 00:08:48.999 clat (usec): min=474, max=15756, avg=5830.61, stdev=1643.79 00:08:48.999 lat (usec): min=516, max=15780, avg=5880.55, stdev=1657.35 00:08:48.999 clat percentiles (usec): 00:08:48.999 | 1.00th=[ 2409], 5.00th=[ 3032], 10.00th=[ 3425], 20.00th=[ 4080], 00:08:48.999 | 30.00th=[ 4752], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 6587], 00:08:48.999 | 70.00th=[ 6849], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[ 7832], 00:08:48.999 | 99.00th=[ 9765], 99.50th=[10552], 99.90th=[11863], 99.95th=[12387], 00:08:48.999 | 99.99th=[13042] 00:08:48.999 bw ( KiB/s): min= 5552, max=45056, per=86.32%, avg=26794.91, stdev=11103.34, samples=11 00:08:48.999 iops : min= 1388, max=11264, avg=6698.73, stdev=2775.83, samples=11 00:08:48.999 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:08:48.999 lat (msec) : 2=0.26%, 4=9.72%, 10=86.65%, 20=3.34% 00:08:48.999 cpu : usr=6.60%, sys=24.46%, ctx=6562, majf=0, minf=145 00:08:48.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:48.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.999 issued rwts: total=77061,38907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.999 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:08:48.999 00:08:48.999 Run status group 0 (all jobs): 00:08:48.999 READ: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=301MiB (316MB), run=6002-6002msec 00:08:48.999 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=152MiB (159MB), run=5014-5014msec 00:08:48.999 00:08:48.999 Disk stats (read/write): 00:08:48.999 nvme0n1: ios=75529/38907, merge=0/0, ticks=498084/210229, in_queue=708313, util=98.58% 00:08:48.999 21:30:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:48.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.999 rmmod nvme_tcp 00:08:48.999 rmmod nvme_fabrics 00:08:48.999 rmmod nvme_keyring 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 
66376 ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66376 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66376 ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66376 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66376 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.999 killing process with pid 66376 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66376' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66376 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66376 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:48.999 00:08:48.999 real 0m19.313s 00:08:48.999 user 1m12.520s 00:08:48.999 sys 0m9.330s 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:48.999 ************************************ 00:08:48.999 END TEST nvmf_target_multipath 00:08:48.999 ************************************ 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.999 
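Before the next test begins, it is worth restating what nvmf_target_multipath exercised: the failover is driven entirely from the target side by changing each listener's ANA state over JSON-RPC, while the Linux host only reports what it sees per path through sysfs and, for the second fio pass, is switched to round-robin path selection. Condensed into a sketch (RPC calls exactly as they appear in the trace; the iopolicy sysfs path is the kernel's standard per-subsystem attribute and is an assumption here, since the trace only shows the bare echo):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Fail over to the 10.0.0.3 path...
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

# ...and back to 10.0.0.2.
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4420 -n inaccessible

# Host-side view of each path, as polled by check_ana_state above.
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

# Path-selection policy toggle used for the second fio run (numa <-> round-robin).
echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy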
************************************ 00:08:48.999 START TEST nvmf_zcopy 00:08:48.999 ************************************ 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:48.999 * Looking for test storage... 00:08:48.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.999 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:49.000 Cannot find device "nvmf_tgt_br" 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.000 Cannot find device "nvmf_tgt_br2" 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:49.000 Cannot find device "nvmf_tgt_br" 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:08:49.000 21:30:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:49.258 Cannot find device "nvmf_tgt_br2" 00:08:49.258 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:49.258 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:49.258 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:49.258 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.258 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.259 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:49.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:49.518 00:08:49.518 --- 10.0.0.2 ping statistics --- 00:08:49.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.518 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:49.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:49.518 00:08:49.518 --- 10.0.0.3 ping statistics --- 00:08:49.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.518 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:49.518 00:08:49.518 --- 10.0.0.1 ping statistics --- 00:08:49.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.518 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66841 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66841 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66841 ']' 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.518 21:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 [2024-07-24 21:30:34.369046] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:08:49.518 [2024-07-24 21:30:34.369127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.518 [2024-07-24 21:30:34.509235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.803 [2024-07-24 21:30:34.626939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.803 [2024-07-24 21:30:34.626995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.803 [2024-07-24 21:30:34.627009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.803 [2024-07-24 21:30:34.627021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.803 [2024-07-24 21:30:34.627031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.803 [2024-07-24 21:30:34.627066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.803 [2024-07-24 21:30:34.690220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 [2024-07-24 21:30:35.305513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.370 [2024-07-24 21:30:35.321655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 malloc0 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:50.370 { 00:08:50.370 "params": { 00:08:50.370 "name": "Nvme$subsystem", 00:08:50.370 "trtype": "$TEST_TRANSPORT", 00:08:50.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.370 "adrfam": "ipv4", 00:08:50.370 "trsvcid": "$NVMF_PORT", 00:08:50.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.370 "hdgst": ${hdgst:-false}, 00:08:50.370 "ddgst": ${ddgst:-false} 00:08:50.370 }, 00:08:50.370 "method": "bdev_nvme_attach_controller" 00:08:50.370 } 00:08:50.370 EOF 00:08:50.370 )") 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
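Collected from the rpc_cmd traces above, the target-side setup for this test boils down to the following RPC sequence. This is a sketch only: rpc_cmd is the autotest wrapper, so the calls are shown as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket, while the subcommands and flags themselves are copied verbatim from the log.

  # TCP transport with zero-copy enabled and in-capsule data size set to 0
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem allowing any host (-a), serial SPDK00000000000001, up to 10 namespaces
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # data and discovery listeners on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf invocation traced next to it (--json /dev/fd/62 -t 10 -q 128 -w verify -o 8192) consumes the JSON produced by gen_nvmf_target_json; its expanded bdev_nvme_attach_controller parameters are printed a few lines further down.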
00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:50.370 21:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:50.370 "params": { 00:08:50.370 "name": "Nvme1", 00:08:50.370 "trtype": "tcp", 00:08:50.370 "traddr": "10.0.0.2", 00:08:50.370 "adrfam": "ipv4", 00:08:50.370 "trsvcid": "4420", 00:08:50.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.370 "hdgst": false, 00:08:50.370 "ddgst": false 00:08:50.370 }, 00:08:50.370 "method": "bdev_nvme_attach_controller" 00:08:50.370 }' 00:08:50.629 [2024-07-24 21:30:35.421521] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:08:50.629 [2024-07-24 21:30:35.421656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66874 ] 00:08:50.629 [2024-07-24 21:30:35.566368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.888 [2024-07-24 21:30:35.684820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.888 [2024-07-24 21:30:35.767161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.146 Running I/O for 10 seconds... 00:09:01.126 00:09:01.126 Latency(us) 00:09:01.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.126 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:01.126 Verification LBA range: start 0x0 length 0x1000 00:09:01.126 Nvme1n1 : 10.01 6548.83 51.16 0.00 0.00 19489.31 2159.71 31457.28 00:09:01.126 =================================================================================================================== 00:09:01.126 Total : 6548.83 51.16 0.00 0.00 19489.31 2159.71 31457.28 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66990 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:01.385 { 00:09:01.385 "params": { 00:09:01.385 "name": "Nvme$subsystem", 00:09:01.385 "trtype": "$TEST_TRANSPORT", 00:09:01.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.385 "adrfam": "ipv4", 00:09:01.385 "trsvcid": "$NVMF_PORT", 00:09:01.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.385 "hdgst": ${hdgst:-false}, 00:09:01.385 "ddgst": ${ddgst:-false} 00:09:01.385 }, 00:09:01.385 "method": "bdev_nvme_attach_controller" 00:09:01.385 } 00:09:01.385 
EOF 00:09:01.385 )") 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:01.385 [2024-07-24 21:30:46.146828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.146889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:01.385 21:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:01.385 "params": { 00:09:01.385 "name": "Nvme1", 00:09:01.385 "trtype": "tcp", 00:09:01.385 "traddr": "10.0.0.2", 00:09:01.385 "adrfam": "ipv4", 00:09:01.385 "trsvcid": "4420", 00:09:01.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.385 "hdgst": false, 00:09:01.385 "ddgst": false 00:09:01.385 }, 00:09:01.385 "method": "bdev_nvme_attach_controller" 00:09:01.385 }' 00:09:01.385 [2024-07-24 21:30:46.158778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.158804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.170771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.170811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.180883] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:09:01.385 [2024-07-24 21:30:46.180946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66990 ] 00:09:01.385 [2024-07-24 21:30:46.182774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.182796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.194774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.194812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.206775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.206797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.218780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.218819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.230785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.230825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.385 [2024-07-24 21:30:46.243193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.385 [2024-07-24 21:30:46.243216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.255190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.255212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.267192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.267214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.279195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.279216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.291197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.291218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.303199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.303229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.308939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.386 [2024-07-24 21:30:46.315201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.315246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.327204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.327232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.339206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.339231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.351208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.351233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.363210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.363231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.386 [2024-07-24 21:30:46.375212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.386 [2024-07-24 21:30:46.375234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.387219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.387238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.399218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.399237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.399875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.644 [2024-07-24 21:30:46.411220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.411241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.423222] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.423254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.435224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.435261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.447248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.447275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.459240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.459259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.468214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.644 [2024-07-24 21:30:46.471233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.471269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.644 [2024-07-24 21:30:46.483243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.644 [2024-07-24 21:30:46.483271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.495237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.495257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.507240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.507258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.519278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.519301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.531270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.531295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.543279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.543305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.555288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.555314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.567300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.567331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.579303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.579331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 Running I/O for 5 seconds... 
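From here the log is dominated by the same pair of messages, 'Requested NSID 1 already in use' followed by 'Unable to add namespace', repeating every few milliseconds while the second bdevperf job (-t 5 -q 128 -w randrw -M 50 -o 8192, fed the same generated bdev_nvme_attach_controller JSON) runs its 5-second 50/50 random read/write workload. One plausible reading of that cadence, offered purely as an interpretation of the log rather than the actual contents of target/zcopy.sh, is a loop that keeps re-issuing nvmf_subsystem_add_ns for the already-populated NSID 1 while I/O is in flight, so every attempt is expected to be rejected:

  # hypothetical reconstruction of the pattern behind the repeated errors
  while kill -0 "$perfpid" 2>/dev/null; do
      # NSID 1 is already attached to malloc0, so the target refuses each attempt
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

Read that way, the errors look intentional rather than a failure of the run; for reference, the preceding 10-second verify pass completed cleanly at roughly 6.5k IOPS / 51 MiB/s with 8 KiB I/Os and zero failed or timed-out I/Os.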
00:09:01.645 [2024-07-24 21:30:46.591320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.591343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.608723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.608766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.625698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.625744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.645 [2024-07-24 21:30:46.642219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.645 [2024-07-24 21:30:46.642248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.659416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.659444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.676806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.676846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.691924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.691952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.702936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.702981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.718399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.718427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.735381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.735409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.752220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.752249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.769418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.769447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.785738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.785782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.803724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.803753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.818355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 
[2024-07-24 21:30:46.818383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.832715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.832758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.849264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.849292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.864900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.864945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.876299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.876327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.903 [2024-07-24 21:30:46.892068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.903 [2024-07-24 21:30:46.892096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:46.909373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:46.909402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:46.926189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:46.926218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:46.942815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:46.942863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:46.959383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:46.959411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:46.976453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:46.976481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:46.992695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:46.992739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.010117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.010145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.026706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.026750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.043957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.044000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.060845] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.060890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.077778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.077806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.094882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.094926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.112140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.112168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.128917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.128945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.162 [2024-07-24 21:30:47.145645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.162 [2024-07-24 21:30:47.145690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.162148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.162176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.179249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.179277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.196087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.196115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.212147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.212187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.229379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.229407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.246424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.246452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.262550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.262577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.279365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.279393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.295461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.295489] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.312103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.312132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.327963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.328008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.344741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.344769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.361664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.361708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.378340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.378367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.394278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.394306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.421 [2024-07-24 21:30:47.412594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.421 [2024-07-24 21:30:47.412628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.427106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.427132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.443817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.443866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.459105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.459150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.470667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.470708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.486225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.486252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.503510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.503537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.519297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.519325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.536364] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.536390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.551051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.551090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.566928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.566954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.585419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.585443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.599259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.599282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.614968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.614996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.631991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.632020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.649027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.649055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.680 [2024-07-24 21:30:47.665748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.680 [2024-07-24 21:30:47.665793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.683344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.683372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.698388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.698415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.714031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.714059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.731335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.731363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.747873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.747902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.764515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.764544] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.780390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.780418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.798210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.798253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.813437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.813466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.830951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.830978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.847842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.847885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.864129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.864158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.881132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.881160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.897893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.897938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.914378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.914406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.939 [2024-07-24 21:30:47.931915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.939 [2024-07-24 21:30:47.931944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:47.948507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:47.948536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:47.965687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:47.965731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:47.982528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:47.982555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:47.999598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:47.999636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.015367] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.015395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.026157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.026185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.041953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.041982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.059130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.059159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.075747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.075774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.092594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.092632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.108650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.108678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.126368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.126396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.141780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.141841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.158869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.158897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.174718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.174761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.197 [2024-07-24 21:30:48.192344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.197 [2024-07-24 21:30:48.192371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.209279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.209307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.226176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.226203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.242523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.242551] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.258681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.258724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.276114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.276142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.291357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.291386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.302397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.302425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.318380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.318408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.334956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.334986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.352185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.352221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.369001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.369030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.384988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.385015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.402476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.402504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.419225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.419253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.436593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.436647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.455 [2024-07-24 21:30:48.451641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.455 [2024-07-24 21:30:48.451709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.467167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.467194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.483840] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.483884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.500987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.501015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.517856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.517900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.534720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.534763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.551306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.551353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.568623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.568680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.585134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.585163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.601281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.601310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.617822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.617848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.635251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.635275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.651954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.651980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.669918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.669945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.685541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.685567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.714 [2024-07-24 21:30:48.702302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.714 [2024-07-24 21:30:48.702325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.718278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.718306] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.735829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.735875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.750712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.750756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.766736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.766778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.783619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.783655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.800049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.800094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.816960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.973 [2024-07-24 21:30:48.817004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.973 [2024-07-24 21:30:48.833444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.833472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.850213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.850241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.867342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.867371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.882152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.882180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.896416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.896444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.911717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.911761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.928730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.928757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.945532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.945560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.974 [2024-07-24 21:30:48.962342] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.974 [2024-07-24 21:30:48.962369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:48.978787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:48.978830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:48.989686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:48.989731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.005335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.005364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.022750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.022777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.039526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.039554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.056697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.056740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.072183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.072222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.088437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.088466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.105731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.105775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.121080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.121108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.138143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.138170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.155374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.155402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.170366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.170393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.181111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.181139] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.197392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.197420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.214306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.214334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.233 [2024-07-24 21:30:49.231351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.233 [2024-07-24 21:30:49.231380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.492 [2024-07-24 21:30:49.247188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.492 [2024-07-24 21:30:49.247215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.492 [2024-07-24 21:30:49.264407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.492 [2024-07-24 21:30:49.264435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.492 [2024-07-24 21:30:49.281343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.492 [2024-07-24 21:30:49.281372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.492 [2024-07-24 21:30:49.297669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.492 [2024-07-24 21:30:49.297712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.315089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.315117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.330530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.330558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.348084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.348111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.365113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.365140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.382497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.382522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.396962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.396988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.413184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.413218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.428286] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.428325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.442880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.442908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.452190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.452237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.467951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.468008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.477414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.477439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.493 [2024-07-24 21:30:49.492434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.493 [2024-07-24 21:30:49.492463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.508958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.509004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.525532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.525556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.542243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.542268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.558263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.558288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.575317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.575342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.592455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.592481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.607626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.607660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.618691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.618716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.633429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.633455] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.650820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.650854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.667086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.667111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.683974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.683999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.699812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.699837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.717146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.717172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.752 [2024-07-24 21:30:49.733551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.752 [2024-07-24 21:30:49.733578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.751430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.751455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.767135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.767176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.784199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.784228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.800428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.800452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.818644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.818701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.833302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.833327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.847979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.848005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.859265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.859291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.874620] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.874654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.891773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.891797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.908392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.908417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.925230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.925263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.942205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.942230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.959253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.959277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.975078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.975103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:49.991820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:49.991854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.011 [2024-07-24 21:30:50.008600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.011 [2024-07-24 21:30:50.008649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.270 [2024-07-24 21:30:50.025082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.270 [2024-07-24 21:30:50.025108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.270 [2024-07-24 21:30:50.042187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.270 [2024-07-24 21:30:50.042212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.270 [2024-07-24 21:30:50.058533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.270 [2024-07-24 21:30:50.058559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.270 [2024-07-24 21:30:50.075298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.270 [2024-07-24 21:30:50.075323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.270 [2024-07-24 21:30:50.091586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.270 [2024-07-24 21:30:50.091611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.108541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.108566] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.125163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.125188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.141905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.141934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.158376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.158400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.175031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.175056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.191457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.191482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.208691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.208727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.225579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.225604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.242505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.242530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.271 [2024-07-24 21:30:50.258475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.271 [2024-07-24 21:30:50.258500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.275325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.275349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.291760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.291783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.308462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.308487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.324943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.324969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.341877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.341903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.358645] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.358684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.375076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.375102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.392542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.392567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.408844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.408881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.425814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.425840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.442694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.442732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.459149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.459174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.476174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.476198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.491863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.491888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.509132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.509158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.530 [2024-07-24 21:30:50.525671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.530 [2024-07-24 21:30:50.525696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.542066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.542092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.558507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.558532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.575075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.575100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.591944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.591977] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.608891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.608917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.624986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.625012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.641382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.641407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.658065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.658090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.674953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.674978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.691125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.691150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.708085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.708110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.789 [2024-07-24 21:30:50.724390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.789 [2024-07-24 21:30:50.724415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.790 [2024-07-24 21:30:50.741174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.790 [2024-07-24 21:30:50.741208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.790 [2024-07-24 21:30:50.757877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.790 [2024-07-24 21:30:50.757906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.790 [2024-07-24 21:30:50.774948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.790 [2024-07-24 21:30:50.774973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.791680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.791705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.808675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.808700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.825842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.825879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.841782] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.841807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.852838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.852870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.868764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.868789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.885740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.885798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.900147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.900177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.915460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.915485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.926011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.926035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.940981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.941023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.958133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.958166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.974211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.974241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:50.990443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:50.990468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:51.006838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:51.006870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:51.023109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:51.023134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.049 [2024-07-24 21:30:51.033964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.049 [2024-07-24 21:30:51.033990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.049982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.050008] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.066565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.066591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.082931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.082957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.099142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.099167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.109770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.109796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.125768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.125793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.142372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.142397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.159279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.159304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.175747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.175785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.192872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.192897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.208693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.208718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.219316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.219341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.234755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.234780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.251698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.251722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.267669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.267692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.284404] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.284429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.308 [2024-07-24 21:30:51.300967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.308 [2024-07-24 21:30:51.300992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.317528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.317553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.334069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.334094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.350095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.350120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.364436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.364461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.380289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.380314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.396254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.396279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.413185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.413210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.430197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.430226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.446429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.446454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.462705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.462747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.479540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.479566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.496234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.496259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.511126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.511151] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.519709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.519747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.534366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.534391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.550280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.567 [2024-07-24 21:30:51.550305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.567 [2024-07-24 21:30:51.566589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.568 [2024-07-24 21:30:51.566614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.583534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.583559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.595219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.595248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 00:09:06.827 Latency(us) 00:09:06.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.827 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:06.827 Nvme1n1 : 5.01 13895.44 108.56 0.00 0.00 9200.08 3902.37 19303.33 00:09:06.827 =================================================================================================================== 00:09:06.827 Total : 13895.44 108.56 0.00 0.00 9200.08 3902.37 19303.33 00:09:06.827 [2024-07-24 21:30:51.607218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.607253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.619218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.619242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.631216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.631254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.643219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.643253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.655221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.655263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.667224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.667259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.679228] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.679258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.691232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.691252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.703234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.703254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.715237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.715269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.727240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.727278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.739242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.739262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.751243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.751275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.763247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.763279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.775262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.775294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.787253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.787273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.799260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.799293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.811268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.811288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.827 [2024-07-24 21:30:51.823263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.827 [2024-07-24 21:30:51.823301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.086 [2024-07-24 21:30:51.835265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.086 [2024-07-24 21:30:51.835285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.086 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66990) - No such process 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- 
# wait 66990 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.086 delay0 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.086 21:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:07.087 [2024-07-24 21:30:52.045793] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:13.649 Initializing NVMe Controllers 00:09:13.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:13.649 Initialization complete. Launching workers. 
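Note on the trace above: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages is produced by the add-namespace loop that zcopy.sh drives while I/O is in flight, so those errors appear to be expected test traffic rather than failures. The steps traced at zcopy.sh@52-@56 can be reproduced by hand against a running target; a minimal sketch, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket (both assumptions, not shown in the log):

    # drop the namespace the add-ns loop was exercising
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev (1 s average latency on every path, as in the traced run)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the delay bdev as NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive abortable I/O at the target over TCP, same arguments as above
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps each I/O outstanding long enough for the abort example to find commands to cancel, which is what the "abort submitted ... success ... unsuccess" summary that follows reports.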
00:09:13.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:09:13.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33 00:09:13.649 success 242, unsuccess 154, failed 0 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.649 rmmod nvme_tcp 00:09:13.649 rmmod nvme_fabrics 00:09:13.649 rmmod nvme_keyring 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66841 ']' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66841 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66841 ']' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66841 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66841 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.649 killing process with pid 66841 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66841' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66841 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66841 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:13.649 00:09:13.649 real 0m24.714s 00:09:13.649 user 0m39.602s 00:09:13.649 sys 0m7.468s 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.649 ************************************ 00:09:13.649 END TEST nvmf_zcopy 00:09:13.649 ************************************ 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.649 21:30:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.649 ************************************ 00:09:13.649 START TEST nvmf_nmic 00:09:13.650 ************************************ 00:09:13.650 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:13.908 * Looking for test storage... 00:09:13.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.908 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:13.909 Cannot find device "nvmf_tgt_br" 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.909 Cannot find device "nvmf_tgt_br2" 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:13.909 Cannot find device "nvmf_tgt_br" 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:13.909 Cannot find device "nvmf_tgt_br2" 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.909 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.168 21:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.168 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.168 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.168 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:14.168 00:09:14.168 --- 10.0.0.2 ping statistics --- 00:09:14.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.168 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:14.168 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:14.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:14.169 00:09:14.169 --- 10.0.0.3 ping statistics --- 00:09:14.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.169 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:14.169 00:09:14.169 --- 10.0.0.1 ping statistics --- 00:09:14.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.169 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67313 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67313 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67313 ']' 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.169 21:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.169 [2024-07-24 21:30:59.112241] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:09:14.169 [2024-07-24 21:30:59.112326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.427 [2024-07-24 21:30:59.254611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.427 [2024-07-24 21:30:59.383237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.427 [2024-07-24 21:30:59.383317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.427 [2024-07-24 21:30:59.383332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.427 [2024-07-24 21:30:59.383344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.427 [2024-07-24 21:30:59.383353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.427 [2024-07-24 21:30:59.383708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.427 [2024-07-24 21:30:59.383789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.427 [2024-07-24 21:30:59.384317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.427 [2024-07-24 21:30:59.384371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.685 [2024-07-24 21:30:59.462082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 [2024-07-24 21:31:00.160228] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 Malloc0 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.252 21:31:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 [2024-07-24 21:31:00.234674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.252 test case1: single bdev can't be used in multiple subsystems 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.252 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.511 [2024-07-24 21:31:00.258494] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:15.511 [2024-07-24 21:31:00.258539] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:15.511 [2024-07-24 21:31:00.258551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.511 request: 00:09:15.511 { 00:09:15.511 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:15.511 "namespace": { 00:09:15.511 "bdev_name": "Malloc0", 00:09:15.511 "no_auto_visible": false 00:09:15.511 }, 00:09:15.511 "method": "nvmf_subsystem_add_ns", 00:09:15.511 "req_id": 1 00:09:15.511 } 00:09:15.511 Got JSON-RPC error response 00:09:15.511 response: 00:09:15.511 { 00:09:15.511 "code": -32602, 00:09:15.511 "message": "Invalid parameters" 00:09:15.511 } 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:15.511 Adding namespace failed - expected result. 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:15.511 test case2: host connect to nvmf target in multiple paths 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.511 [2024-07-24 21:31:00.270588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.511 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:15.770 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.770 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.770 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.770 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.770 21:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.673 21:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.673 21:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.673 21:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.673 21:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.673 21:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.673 21:31:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:17.673 21:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:17.673 [global] 00:09:17.673 thread=1 00:09:17.673 invalidate=1 00:09:17.673 rw=write 00:09:17.673 time_based=1 00:09:17.673 runtime=1 00:09:17.673 ioengine=libaio 00:09:17.673 direct=1 00:09:17.673 bs=4096 00:09:17.673 iodepth=1 00:09:17.673 norandommap=0 00:09:17.673 numjobs=1 00:09:17.673 00:09:17.673 verify_dump=1 00:09:17.673 verify_backlog=512 00:09:17.673 verify_state_save=0 00:09:17.673 do_verify=1 00:09:17.673 verify=crc32c-intel 00:09:17.673 [job0] 00:09:17.673 filename=/dev/nvme0n1 00:09:17.673 Could not set queue depth (nvme0n1) 00:09:17.932 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.932 fio-3.35 00:09:17.932 Starting 1 thread 00:09:18.869 00:09:18.869 job0: (groupid=0, jobs=1): err= 0: pid=67399: Wed Jul 24 21:31:03 2024 00:09:18.869 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:18.869 slat (nsec): min=10189, max=59404, avg=13622.14, stdev=4809.46 00:09:18.869 clat (usec): min=145, max=6812, avg=263.38, stdev=239.00 00:09:18.869 lat (usec): min=156, max=6871, avg=277.00, stdev=240.32 00:09:18.869 clat percentiles (usec): 00:09:18.869 | 1.00th=[ 169], 5.00th=[ 192], 10.00th=[ 204], 20.00th=[ 223], 00:09:18.869 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 258], 00:09:18.869 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 318], 00:09:18.869 | 99.00th=[ 392], 99.50th=[ 693], 99.90th=[ 3621], 99.95th=[ 6718], 00:09:18.869 | 99.99th=[ 6783] 00:09:18.869 write: IOPS=2411, BW=9646KiB/s (9878kB/s)(9656KiB/1001msec); 0 zone resets 00:09:18.869 slat (usec): min=14, max=107, avg=21.44, stdev= 8.76 00:09:18.869 clat (usec): min=85, max=7529, avg=154.83, stdev=167.77 00:09:18.869 lat (usec): min=100, max=7550, avg=176.27, stdev=168.31 00:09:18.869 clat percentiles (usec): 00:09:18.869 | 1.00th=[ 99], 5.00th=[ 110], 10.00th=[ 118], 20.00th=[ 128], 00:09:18.869 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 153], 00:09:18.869 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 200], 00:09:18.869 | 99.00th=[ 231], 99.50th=[ 255], 99.90th=[ 2114], 99.95th=[ 2835], 00:09:18.869 | 99.99th=[ 7504] 00:09:18.869 bw ( KiB/s): min= 8912, max= 8912, per=92.39%, avg=8912.00, stdev= 0.00, samples=1 00:09:18.869 iops : min= 2228, max= 2228, avg=2228.00, stdev= 0.00, samples=1 00:09:18.869 lat (usec) : 100=0.61%, 250=76.65%, 500=22.41%, 750=0.04%, 1000=0.04% 00:09:18.869 lat (msec) : 2=0.07%, 4=0.11%, 10=0.07% 00:09:18.869 cpu : usr=1.30%, sys=6.40%, ctx=4462, majf=0, minf=2 00:09:18.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.869 issued rwts: total=2048,2414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.869 00:09:18.869 Run status group 0 (all jobs): 00:09:18.869 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:18.869 WRITE: bw=9646KiB/s (9878kB/s), 9646KiB/s-9646KiB/s (9878kB/s-9878kB/s), io=9656KiB (9888kB), run=1001-1001msec 00:09:18.869 00:09:18.869 Disk stats 
(read/write): 00:09:18.869 nvme0n1: ios=1987/2048, merge=0/0, ticks=550/342, in_queue=892, util=90.38% 00:09:18.869 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:19.128 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.129 rmmod nvme_tcp 00:09:19.129 rmmod nvme_fabrics 00:09:19.129 rmmod nvme_keyring 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67313 ']' 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67313 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67313 ']' 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67313 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.129 21:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67313 00:09:19.129 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.129 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.129 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67313' 00:09:19.129 killing process with pid 67313 00:09:19.129 21:31:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 67313 00:09:19.129 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67313 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.388 ************************************ 00:09:19.388 END TEST nvmf_nmic 00:09:19.388 ************************************ 00:09:19.388 00:09:19.388 real 0m5.760s 00:09:19.388 user 0m18.715s 00:09:19.388 sys 0m1.951s 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.388 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.648 ************************************ 00:09:19.648 START TEST nvmf_fio_target 00:09:19.648 ************************************ 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:19.648 * Looking for test storage... 
00:09:19.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:19.648 
21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.648 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.649 Cannot find device "nvmf_tgt_br" 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.649 Cannot find device "nvmf_tgt_br2" 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.649 Cannot find device "nvmf_tgt_br" 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.649 Cannot find device "nvmf_tgt_br2" 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:19.649 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.908 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.908 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:19.909 
21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:19.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:19.909 00:09:19.909 --- 10.0.0.2 ping statistics --- 00:09:19.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.909 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:19.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.027 ms 00:09:19.909 00:09:19.909 --- 10.0.0.3 ping statistics --- 00:09:19.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.909 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:19.909 00:09:19.909 --- 10.0.0.1 ping statistics --- 00:09:19.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.909 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67584 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67584 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67584 ']' 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.909 21:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.909 [2024-07-24 21:31:04.892604] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:09:19.909 [2024-07-24 21:31:04.892676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.168 [2024-07-24 21:31:05.024027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.168 [2024-07-24 21:31:05.117079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.168 [2024-07-24 21:31:05.117336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.168 [2024-07-24 21:31:05.117476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.168 [2024-07-24 21:31:05.117722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.168 [2024-07-24 21:31:05.117757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.168 [2024-07-24 21:31:05.117968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.168 [2024-07-24 21:31:05.118265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.168 [2024-07-24 21:31:05.118525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.168 [2024-07-24 21:31:05.118529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.427 [2024-07-24 21:31:05.189046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.995 21:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.252 [2024-07-24 21:31:06.155338] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.252 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.510 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:21.510 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.768 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:21.768 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.026 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:22.026 21:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.285 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:22.285 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:22.543 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.801 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:22.801 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.060 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:23.060 21:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.318 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:23.318 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:23.577 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:23.835 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:23.835 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.093 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:24.094 21:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.094 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.352 [2024-07-24 21:31:09.272217] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.352 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:24.611 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:24.869 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.869 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:24.869 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.869 21:31:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.869 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:24.869 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:24.869 21:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:27.401 21:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:27.401 [global] 00:09:27.401 thread=1 00:09:27.401 invalidate=1 00:09:27.401 rw=write 00:09:27.401 time_based=1 00:09:27.401 runtime=1 00:09:27.401 ioengine=libaio 00:09:27.401 direct=1 00:09:27.401 bs=4096 00:09:27.401 iodepth=1 00:09:27.401 norandommap=0 00:09:27.401 numjobs=1 00:09:27.401 00:09:27.401 verify_dump=1 00:09:27.401 verify_backlog=512 00:09:27.401 verify_state_save=0 00:09:27.401 do_verify=1 00:09:27.401 verify=crc32c-intel 00:09:27.401 [job0] 00:09:27.401 filename=/dev/nvme0n1 00:09:27.401 [job1] 00:09:27.401 filename=/dev/nvme0n2 00:09:27.401 [job2] 00:09:27.401 filename=/dev/nvme0n3 00:09:27.401 [job3] 00:09:27.401 filename=/dev/nvme0n4 00:09:27.401 Could not set queue depth (nvme0n1) 00:09:27.401 Could not set queue depth (nvme0n2) 00:09:27.401 Could not set queue depth (nvme0n3) 00:09:27.401 Could not set queue depth (nvme0n4) 00:09:27.402 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.402 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.402 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.402 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.402 fio-3.35 00:09:27.402 Starting 4 threads 00:09:28.339 00:09:28.339 job0: (groupid=0, jobs=1): err= 0: pid=67769: Wed Jul 24 21:31:13 2024 00:09:28.339 read: IOPS=2600, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:09:28.339 slat (nsec): min=8365, max=46402, avg=14222.59, stdev=4111.30 00:09:28.339 clat (usec): min=126, max=537, avg=196.44, stdev=58.06 00:09:28.339 lat (usec): min=138, max=551, avg=210.66, stdev=57.65 00:09:28.339 clat percentiles (usec): 00:09:28.339 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:09:28.339 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 190], 00:09:28.339 | 70.00th=[ 212], 80.00th=[ 241], 90.00th=[ 302], 95.00th=[ 318], 00:09:28.339 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 412], 99.95th=[ 465], 00:09:28.339 | 99.99th=[ 537] 
00:09:28.339 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:28.339 slat (nsec): min=10263, max=83436, avg=20659.01, stdev=6079.09 00:09:28.339 clat (usec): min=84, max=463, avg=123.28, stdev=23.65 00:09:28.339 lat (usec): min=102, max=488, avg=143.93, stdev=23.74 00:09:28.339 clat percentiles (usec): 00:09:28.339 | 1.00th=[ 91], 5.00th=[ 97], 10.00th=[ 101], 20.00th=[ 108], 00:09:28.339 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 124], 00:09:28.339 | 70.00th=[ 129], 80.00th=[ 137], 90.00th=[ 149], 95.00th=[ 163], 00:09:28.339 | 99.00th=[ 204], 99.50th=[ 225], 99.90th=[ 289], 99.95th=[ 433], 00:09:28.339 | 99.99th=[ 465] 00:09:28.339 bw ( KiB/s): min=13112, max=13112, per=33.28%, avg=13112.00, stdev= 0.00, samples=1 00:09:28.339 iops : min= 3278, max= 3278, avg=3278.00, stdev= 0.00, samples=1 00:09:28.339 lat (usec) : 100=4.30%, 250=87.30%, 500=8.39%, 750=0.02% 00:09:28.339 cpu : usr=2.00%, sys=8.00%, ctx=5675, majf=0, minf=13 00:09:28.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.339 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.339 job1: (groupid=0, jobs=1): err= 0: pid=67770: Wed Jul 24 21:31:13 2024 00:09:28.339 read: IOPS=2030, BW=8124KiB/s (8319kB/s)(8132KiB/1001msec) 00:09:28.339 slat (nsec): min=10743, max=54938, avg=14433.86, stdev=4152.77 00:09:28.339 clat (usec): min=141, max=1904, avg=271.10, stdev=59.34 00:09:28.339 lat (usec): min=154, max=1917, avg=285.54, stdev=59.88 00:09:28.339 clat percentiles (usec): 00:09:28.339 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:09:28.339 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:09:28.339 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 355], 00:09:28.339 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 586], 00:09:28.339 | 99.99th=[ 1909] 00:09:28.339 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:28.339 slat (usec): min=15, max=106, avg=21.68, stdev= 6.21 00:09:28.339 clat (usec): min=86, max=277, avg=179.47, stdev=32.06 00:09:28.339 lat (usec): min=105, max=355, avg=201.15, stdev=32.63 00:09:28.339 clat percentiles (usec): 00:09:28.339 | 1.00th=[ 96], 5.00th=[ 108], 10.00th=[ 124], 20.00th=[ 165], 00:09:28.339 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:09:28.339 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 221], 00:09:28.339 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 269], 00:09:28.339 | 99.99th=[ 277] 00:09:28.339 bw ( KiB/s): min= 8192, max= 8192, per=20.79%, avg=8192.00, stdev= 0.00, samples=1 00:09:28.339 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:28.339 lat (usec) : 100=1.05%, 250=65.01%, 500=33.40%, 750=0.51% 00:09:28.339 lat (msec) : 2=0.02% 00:09:28.339 cpu : usr=1.90%, sys=5.50%, ctx=4083, majf=0, minf=8 00:09:28.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.339 issued rwts: total=2033,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.339 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:28.339 job2: (groupid=0, jobs=1): err= 0: pid=67771: Wed Jul 24 21:31:13 2024 00:09:28.339 read: IOPS=1978, BW=7912KiB/s (8102kB/s)(7920KiB/1001msec) 00:09:28.339 slat (nsec): min=12179, max=76714, avg=15504.07, stdev=4215.50 00:09:28.339 clat (usec): min=151, max=1764, avg=264.32, stdev=47.15 00:09:28.339 lat (usec): min=165, max=1777, avg=279.82, stdev=47.50 00:09:28.339 clat percentiles (usec): 00:09:28.339 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:09:28.339 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:09:28.339 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 330], 00:09:28.339 | 99.00th=[ 383], 99.50th=[ 424], 99.90th=[ 494], 99.95th=[ 1762], 00:09:28.339 | 99.99th=[ 1762] 00:09:28.339 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:28.339 slat (usec): min=17, max=114, avg=23.73, stdev= 7.11 00:09:28.339 clat (usec): min=99, max=648, avg=190.47, stdev=45.12 00:09:28.339 lat (usec): min=127, max=667, avg=214.20, stdev=48.05 00:09:28.339 clat percentiles (usec): 00:09:28.339 | 1.00th=[ 116], 5.00th=[ 126], 10.00th=[ 149], 20.00th=[ 165], 00:09:28.339 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 192], 00:09:28.339 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 227], 95.00th=[ 297], 00:09:28.339 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 392], 99.95th=[ 396], 00:09:28.339 | 99.99th=[ 652] 00:09:28.339 bw ( KiB/s): min= 8192, max= 8192, per=20.79%, avg=8192.00, stdev= 0.00, samples=1 00:09:28.339 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:28.339 lat (usec) : 100=0.02%, 250=64.52%, 500=35.40%, 750=0.02% 00:09:28.339 lat (msec) : 2=0.02% 00:09:28.340 cpu : usr=1.80%, sys=5.80%, ctx=4030, majf=0, minf=13 00:09:28.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.340 issued rwts: total=1980,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.340 job3: (groupid=0, jobs=1): err= 0: pid=67772: Wed Jul 24 21:31:13 2024 00:09:28.340 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:28.340 slat (nsec): min=8521, max=62711, avg=14364.07, stdev=4131.22 00:09:28.340 clat (usec): min=134, max=7389, avg=206.82, stdev=151.22 00:09:28.340 lat (usec): min=145, max=7404, avg=221.18, stdev=151.08 00:09:28.340 clat percentiles (usec): 00:09:28.340 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:09:28.340 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 202], 00:09:28.340 | 70.00th=[ 219], 80.00th=[ 251], 90.00th=[ 285], 95.00th=[ 302], 00:09:28.340 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 570], 99.95th=[ 865], 00:09:28.340 | 99.99th=[ 7373] 00:09:28.340 write: IOPS=2689, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:09:28.340 slat (nsec): min=12756, max=84033, avg=21533.23, stdev=6080.44 00:09:28.340 clat (usec): min=94, max=3167, avg=136.28, stdev=80.47 00:09:28.340 lat (usec): min=112, max=3191, avg=157.82, stdev=80.54 00:09:28.340 clat percentiles (usec): 00:09:28.340 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 119], 00:09:28.340 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 135], 00:09:28.340 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 167], 00:09:28.340 | 99.00th=[ 
204], 99.50th=[ 237], 99.90th=[ 1762], 99.95th=[ 2114], 00:09:28.340 | 99.99th=[ 3163] 00:09:28.340 bw ( KiB/s): min=12288, max=12288, per=31.19%, avg=12288.00, stdev= 0.00, samples=1 00:09:28.340 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:28.340 lat (usec) : 100=0.27%, 250=89.57%, 500=10.00%, 750=0.06%, 1000=0.04% 00:09:28.340 lat (msec) : 2=0.02%, 4=0.04%, 10=0.02% 00:09:28.340 cpu : usr=2.20%, sys=7.30%, ctx=5254, majf=0, minf=3 00:09:28.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.340 issued rwts: total=2560,2692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.340 00:09:28.340 Run status group 0 (all jobs): 00:09:28.340 READ: bw=35.8MiB/s (37.5MB/s), 7912KiB/s-10.2MiB/s (8102kB/s-10.7MB/s), io=35.8MiB (37.6MB), run=1001-1001msec 00:09:28.340 WRITE: bw=38.5MiB/s (40.3MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.5MiB (40.4MB), run=1001-1001msec 00:09:28.340 00:09:28.340 Disk stats (read/write): 00:09:28.340 nvme0n1: ios=2404/2560, merge=0/0, ticks=508/335, in_queue=843, util=88.78% 00:09:28.340 nvme0n2: ios=1575/2022, merge=0/0, ticks=452/384, in_queue=836, util=87.94% 00:09:28.340 nvme0n3: ios=1536/1913, merge=0/0, ticks=427/393, in_queue=820, util=89.13% 00:09:28.340 nvme0n4: ios=2121/2560, merge=0/0, ticks=417/368, in_queue=785, util=89.05% 00:09:28.340 21:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:28.340 [global] 00:09:28.340 thread=1 00:09:28.340 invalidate=1 00:09:28.340 rw=randwrite 00:09:28.340 time_based=1 00:09:28.340 runtime=1 00:09:28.340 ioengine=libaio 00:09:28.340 direct=1 00:09:28.340 bs=4096 00:09:28.340 iodepth=1 00:09:28.340 norandommap=0 00:09:28.340 numjobs=1 00:09:28.340 00:09:28.340 verify_dump=1 00:09:28.340 verify_backlog=512 00:09:28.340 verify_state_save=0 00:09:28.340 do_verify=1 00:09:28.340 verify=crc32c-intel 00:09:28.340 [job0] 00:09:28.340 filename=/dev/nvme0n1 00:09:28.340 [job1] 00:09:28.340 filename=/dev/nvme0n2 00:09:28.340 [job2] 00:09:28.340 filename=/dev/nvme0n3 00:09:28.340 [job3] 00:09:28.340 filename=/dev/nvme0n4 00:09:28.340 Could not set queue depth (nvme0n1) 00:09:28.340 Could not set queue depth (nvme0n2) 00:09:28.340 Could not set queue depth (nvme0n3) 00:09:28.340 Could not set queue depth (nvme0n4) 00:09:28.598 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.598 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.598 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.598 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.598 fio-3.35 00:09:28.598 Starting 4 threads 00:09:29.973 00:09:29.974 job0: (groupid=0, jobs=1): err= 0: pid=67825: Wed Jul 24 21:31:14 2024 00:09:29.974 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:29.974 slat (nsec): min=11093, max=56898, avg=13424.65, stdev=3430.48 00:09:29.974 clat (usec): min=126, max=768, avg=160.37, stdev=20.39 00:09:29.974 lat (usec): min=139, max=780, avg=173.79, stdev=20.57 
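A quick consistency check on the latency lines just above: fio prints submission latency (slat) in nanoseconds here and completion/total latency in microseconds, and total latency per I/O is submission plus completion, so avg slat 13424.65 ns is about 13.42 us, and 13.42 us + 160.37 us (avg clat) gives roughly 173.79 us, which matches the reported avg lat.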
00:09:29.974 clat percentiles (usec): 00:09:29.974 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:09:29.974 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:09:29.974 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 192], 00:09:29.974 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 265], 99.95th=[ 306], 00:09:29.974 | 99.99th=[ 766] 00:09:29.974 write: IOPS=3297, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:09:29.974 slat (nsec): min=14201, max=96910, avg=20513.26, stdev=5263.82 00:09:29.974 clat (usec): min=87, max=453, avg=117.62, stdev=16.08 00:09:29.974 lat (usec): min=106, max=472, avg=138.13, stdev=16.73 00:09:29.974 clat percentiles (usec): 00:09:29.974 | 1.00th=[ 94], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:09:29.974 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:09:29.974 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 137], 95.00th=[ 145], 00:09:29.974 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 269], 99.95th=[ 375], 00:09:29.974 | 99.99th=[ 453] 00:09:29.974 bw ( KiB/s): min=13056, max=13056, per=30.35%, avg=13056.00, stdev= 0.00, samples=1 00:09:29.974 iops : min= 3264, max= 3264, avg=3264.00, stdev= 0.00, samples=1 00:09:29.974 lat (usec) : 100=2.89%, 250=96.92%, 500=0.17%, 1000=0.02% 00:09:29.974 cpu : usr=2.40%, sys=8.30%, ctx=6373, majf=0, minf=7 00:09:29.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.974 issued rwts: total=3072,3301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.974 job1: (groupid=0, jobs=1): err= 0: pid=67826: Wed Jul 24 21:31:14 2024 00:09:29.974 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:29.974 slat (nsec): min=11012, max=47528, avg=14487.28, stdev=3566.10 00:09:29.974 clat (usec): min=122, max=652, avg=157.12, stdev=20.77 00:09:29.974 lat (usec): min=134, max=667, avg=171.61, stdev=21.05 00:09:29.974 clat percentiles (usec): 00:09:29.974 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:09:29.974 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:09:29.974 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:09:29.974 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 229], 99.95th=[ 619], 00:09:29.974 | 99.99th=[ 652] 00:09:29.974 write: IOPS=3363, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:09:29.974 slat (usec): min=12, max=101, avg=21.08, stdev= 5.75 00:09:29.974 clat (usec): min=84, max=2132, avg=116.06, stdev=45.54 00:09:29.974 lat (usec): min=100, max=2155, avg=137.14, stdev=45.95 00:09:29.974 clat percentiles (usec): 00:09:29.974 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 104], 00:09:29.974 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 115], 00:09:29.974 | 70.00th=[ 119], 80.00th=[ 125], 90.00th=[ 135], 95.00th=[ 143], 00:09:29.974 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 510], 99.95th=[ 1450], 00:09:29.974 | 99.99th=[ 2147] 00:09:29.974 bw ( KiB/s): min=13104, max=13104, per=30.47%, avg=13104.00, stdev= 0.00, samples=1 00:09:29.974 iops : min= 3276, max= 3276, avg=3276.00, stdev= 0.00, samples=1 00:09:29.974 lat (usec) : 100=3.84%, 250=96.02%, 500=0.03%, 750=0.08% 00:09:29.974 lat (msec) : 2=0.02%, 4=0.02% 00:09:29.974 cpu : usr=2.50%, sys=8.90%, ctx=6439, majf=0, minf=8 
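The bandwidth figures follow directly from IOPS at the fixed 4 KiB block size; for job1 above, for example, 3363 write IOPS x 4096 bytes is about 13.77 MB/s, which fio rounds and reports as 13.8MB/s (13.1MiB/s).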
00:09:29.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.974 issued rwts: total=3072,3367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.974 job2: (groupid=0, jobs=1): err= 0: pid=67827: Wed Jul 24 21:31:14 2024 00:09:29.974 read: IOPS=1993, BW=7972KiB/s (8163kB/s)(7980KiB/1001msec) 00:09:29.974 slat (nsec): min=10479, max=45950, avg=14171.91, stdev=4110.71 00:09:29.974 clat (usec): min=141, max=1182, avg=259.87, stdev=43.93 00:09:29.974 lat (usec): min=152, max=1202, avg=274.04, stdev=44.13 00:09:29.974 clat percentiles (usec): 00:09:29.974 | 1.00th=[ 167], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:09:29.974 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 262], 00:09:29.974 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:09:29.974 | 99.00th=[ 404], 99.50th=[ 478], 99.90th=[ 889], 99.95th=[ 1188], 00:09:29.974 | 99.99th=[ 1188] 00:09:29.974 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:29.974 slat (usec): min=16, max=101, avg=21.76, stdev= 6.08 00:09:29.974 clat (usec): min=96, max=411, avg=196.53, stdev=21.21 00:09:29.974 lat (usec): min=123, max=428, avg=218.28, stdev=21.65 00:09:29.974 clat percentiles (usec): 00:09:29.974 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:09:29.974 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:29.974 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 233], 00:09:29.974 | 99.00th=[ 251], 99.50th=[ 285], 99.90th=[ 363], 99.95th=[ 367], 00:09:29.974 | 99.99th=[ 412] 00:09:29.974 bw ( KiB/s): min= 8192, max= 8192, per=19.05%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.974 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.974 lat (usec) : 100=0.02%, 250=70.00%, 500=29.83%, 750=0.07%, 1000=0.05% 00:09:29.974 lat (msec) : 2=0.02% 00:09:29.974 cpu : usr=1.70%, sys=5.40%, ctx=4046, majf=0, minf=15 00:09:29.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.974 issued rwts: total=1995,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.974 job3: (groupid=0, jobs=1): err= 0: pid=67828: Wed Jul 24 21:31:14 2024 00:09:29.974 read: IOPS=1954, BW=7816KiB/s (8004kB/s)(7824KiB/1001msec) 00:09:29.975 slat (nsec): min=10885, max=48597, avg=13707.99, stdev=3910.53 00:09:29.975 clat (usec): min=140, max=2308, avg=265.38, stdev=78.71 00:09:29.975 lat (usec): min=153, max=2329, avg=279.09, stdev=79.60 00:09:29.975 clat percentiles (usec): 00:09:29.975 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:09:29.975 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:09:29.975 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:09:29.975 | 99.00th=[ 457], 99.50th=[ 586], 99.90th=[ 2147], 99.95th=[ 2311], 00:09:29.975 | 99.99th=[ 2311] 00:09:29.975 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:29.975 slat (nsec): min=16745, max=88535, avg=20375.60, stdev=4586.45 00:09:29.975 clat (usec): min=107, max=958, 
avg=197.95, stdev=28.07 00:09:29.975 lat (usec): min=125, max=987, avg=218.33, stdev=28.67 00:09:29.975 clat percentiles (usec): 00:09:29.975 | 1.00th=[ 129], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:09:29.975 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:09:29.975 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 231], 00:09:29.975 | 99.00th=[ 247], 99.50th=[ 289], 99.90th=[ 478], 99.95th=[ 537], 00:09:29.975 | 99.99th=[ 963] 00:09:29.975 bw ( KiB/s): min= 8192, max= 8192, per=19.05%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.975 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.975 lat (usec) : 250=68.93%, 500=30.57%, 750=0.32%, 1000=0.10% 00:09:29.975 lat (msec) : 2=0.02%, 4=0.05% 00:09:29.975 cpu : usr=1.80%, sys=4.90%, ctx=4007, majf=0, minf=15 00:09:29.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.975 issued rwts: total=1956,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.975 00:09:29.975 Run status group 0 (all jobs): 00:09:29.975 READ: bw=39.4MiB/s (41.3MB/s), 7816KiB/s-12.0MiB/s (8004kB/s-12.6MB/s), io=39.4MiB (41.3MB), run=1001-1001msec 00:09:29.975 WRITE: bw=42.0MiB/s (44.0MB/s), 8184KiB/s-13.1MiB/s (8380kB/s-13.8MB/s), io=42.0MiB (44.1MB), run=1001-1001msec 00:09:29.975 00:09:29.975 Disk stats (read/write): 00:09:29.975 nvme0n1: ios=2610/3025, merge=0/0, ticks=464/389, in_queue=853, util=89.38% 00:09:29.975 nvme0n2: ios=2612/3072, merge=0/0, ticks=440/383, in_queue=823, util=89.10% 00:09:29.975 nvme0n3: ios=1536/2017, merge=0/0, ticks=402/426, in_queue=828, util=89.44% 00:09:29.975 nvme0n4: ios=1536/2023, merge=0/0, ticks=401/419, in_queue=820, util=89.70% 00:09:29.975 21:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:29.975 [global] 00:09:29.975 thread=1 00:09:29.975 invalidate=1 00:09:29.975 rw=write 00:09:29.975 time_based=1 00:09:29.975 runtime=1 00:09:29.975 ioengine=libaio 00:09:29.975 direct=1 00:09:29.975 bs=4096 00:09:29.975 iodepth=128 00:09:29.975 norandommap=0 00:09:29.975 numjobs=1 00:09:29.975 00:09:29.975 verify_dump=1 00:09:29.975 verify_backlog=512 00:09:29.975 verify_state_save=0 00:09:29.975 do_verify=1 00:09:29.975 verify=crc32c-intel 00:09:29.975 [job0] 00:09:29.975 filename=/dev/nvme0n1 00:09:29.975 [job1] 00:09:29.975 filename=/dev/nvme0n2 00:09:29.975 [job2] 00:09:29.975 filename=/dev/nvme0n3 00:09:29.975 [job3] 00:09:29.975 filename=/dev/nvme0n4 00:09:29.975 Could not set queue depth (nvme0n1) 00:09:29.975 Could not set queue depth (nvme0n2) 00:09:29.975 Could not set queue depth (nvme0n3) 00:09:29.975 Could not set queue depth (nvme0n4) 00:09:29.975 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.975 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.975 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.975 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.975 fio-3.35 00:09:29.975 Starting 4 threads 00:09:31.355 00:09:31.355 job0: 
(groupid=0, jobs=1): err= 0: pid=67881: Wed Jul 24 21:31:15 2024 00:09:31.355 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:31.355 slat (usec): min=5, max=12940, avg=189.14, stdev=1102.11 00:09:31.355 clat (usec): min=10868, max=52756, avg=23726.21, stdev=9957.31 00:09:31.355 lat (usec): min=13084, max=52773, avg=23915.35, stdev=9982.89 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[12780], 5.00th=[14091], 10.00th=[14877], 20.00th=[15926], 00:09:31.355 | 30.00th=[16057], 40.00th=[16319], 50.00th=[21890], 60.00th=[24773], 00:09:31.355 | 70.00th=[25560], 80.00th=[26870], 90.00th=[40109], 95.00th=[46400], 00:09:31.355 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:09:31.355 | 99.99th=[52691] 00:09:31.355 write: IOPS=3127, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1003msec); 0 zone resets 00:09:31.355 slat (usec): min=11, max=12360, avg=125.50, stdev=648.95 00:09:31.355 clat (usec): min=579, max=44476, avg=16804.51, stdev=7103.19 00:09:31.355 lat (usec): min=3499, max=44506, avg=16930.01, stdev=7106.42 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[ 4490], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:09:31.355 | 30.00th=[12518], 40.00th=[12780], 50.00th=[15008], 60.00th=[16188], 00:09:31.355 | 70.00th=[17171], 80.00th=[19792], 90.00th=[25560], 95.00th=[35914], 00:09:31.355 | 99.00th=[39584], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:09:31.355 | 99.99th=[44303] 00:09:31.355 bw ( KiB/s): min=12288, max=12312, per=16.69%, avg=12300.00, stdev=16.97, samples=2 00:09:31.355 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:31.355 lat (usec) : 750=0.02% 00:09:31.355 lat (msec) : 4=0.23%, 10=1.10%, 20=62.49%, 50=34.18%, 100=2.00% 00:09:31.355 cpu : usr=3.29%, sys=8.98%, ctx=198, majf=0, minf=8 00:09:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.355 issued rwts: total=3072,3137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.355 job1: (groupid=0, jobs=1): err= 0: pid=67882: Wed Jul 24 21:31:15 2024 00:09:31.355 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:09:31.355 slat (usec): min=5, max=2653, avg=76.03, stdev=293.55 00:09:31.355 clat (usec): min=7573, max=13179, avg=10109.48, stdev=719.56 00:09:31.355 lat (usec): min=7592, max=13319, avg=10185.52, stdev=757.73 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[ 9765], 00:09:31.355 | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:09:31.355 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11076], 95.00th=[11600], 00:09:31.355 | 99.00th=[12125], 99.50th=[12649], 99.90th=[12911], 99.95th=[13042], 00:09:31.355 | 99.99th=[13173] 00:09:31.355 write: IOPS=6625, BW=25.9MiB/s (27.1MB/s)(25.9MiB/1002msec); 0 zone resets 00:09:31.355 slat (usec): min=10, max=2749, avg=73.41, stdev=315.64 00:09:31.355 clat (usec): min=1518, max=13343, avg=9723.79, stdev=1020.45 00:09:31.355 lat (usec): min=1537, max=13399, avg=9797.20, stdev=1062.54 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[ 5145], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9241], 00:09:31.355 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:09:31.355 | 70.00th=[ 9896], 80.00th=[10421], 
90.00th=[10814], 95.00th=[11338], 00:09:31.355 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13042], 99.95th=[13173], 00:09:31.355 | 99.99th=[13304] 00:09:31.355 bw ( KiB/s): min=25731, max=26416, per=35.38%, avg=26073.50, stdev=484.37, samples=2 00:09:31.355 iops : min= 6432, max= 6604, avg=6518.00, stdev=121.62, samples=2 00:09:31.355 lat (msec) : 2=0.17%, 4=0.04%, 10=58.34%, 20=41.45% 00:09:31.355 cpu : usr=6.19%, sys=14.89%, ctx=561, majf=0, minf=1 00:09:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.355 issued rwts: total=6144,6639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.355 job2: (groupid=0, jobs=1): err= 0: pid=67883: Wed Jul 24 21:31:15 2024 00:09:31.355 read: IOPS=5409, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1001msec) 00:09:31.355 slat (usec): min=4, max=3166, avg=88.37, stdev=354.35 00:09:31.355 clat (usec): min=464, max=15155, avg=11570.74, stdev=1168.55 00:09:31.355 lat (usec): min=2052, max=15164, avg=11659.10, stdev=1199.32 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[ 6063], 5.00th=[10028], 10.00th=[10814], 20.00th=[11338], 00:09:31.355 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:09:31.355 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12911], 95.00th=[13304], 00:09:31.355 | 99.00th=[13698], 99.50th=[14091], 99.90th=[15139], 99.95th=[15139], 00:09:31.355 | 99.99th=[15139] 00:09:31.355 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:09:31.355 slat (usec): min=10, max=3830, avg=86.90, stdev=407.60 00:09:31.355 clat (usec): min=8364, max=15712, avg=11333.26, stdev=829.13 00:09:31.355 lat (usec): min=8384, max=15729, avg=11420.16, stdev=908.50 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:09:31.355 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:09:31.355 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[13304], 00:09:31.355 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15664], 99.95th=[15664], 00:09:31.355 | 99.99th=[15664] 00:09:31.355 bw ( KiB/s): min=22536, max=22536, per=30.58%, avg=22536.00, stdev= 0.00, samples=1 00:09:31.355 iops : min= 5634, max= 5634, avg=5634.00, stdev= 0.00, samples=1 00:09:31.355 lat (usec) : 500=0.01% 00:09:31.355 lat (msec) : 4=0.18%, 10=3.41%, 20=96.40% 00:09:31.355 cpu : usr=3.30%, sys=13.20%, ctx=454, majf=0, minf=1 00:09:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.355 issued rwts: total=5415,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.355 job3: (groupid=0, jobs=1): err= 0: pid=67884: Wed Jul 24 21:31:15 2024 00:09:31.355 read: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:09:31.355 slat (usec): min=5, max=11796, avg=168.56, stdev=834.96 00:09:31.355 clat (usec): min=750, max=43781, avg=20638.64, stdev=4745.10 00:09:31.355 lat (usec): min=4377, max=43798, avg=20807.20, stdev=4803.41 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[ 8979], 5.00th=[15139], 10.00th=[17171], 
20.00th=[18482], 00:09:31.355 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:09:31.355 | 70.00th=[21627], 80.00th=[23987], 90.00th=[26346], 95.00th=[27919], 00:09:31.355 | 99.00th=[39060], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:09:31.355 | 99.99th=[43779] 00:09:31.355 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:31.355 slat (usec): min=13, max=7061, avg=161.69, stdev=698.69 00:09:31.355 clat (usec): min=9163, max=56991, avg=22198.34, stdev=11841.68 00:09:31.355 lat (usec): min=9186, max=57018, avg=22360.03, stdev=11922.41 00:09:31.355 clat percentiles (usec): 00:09:31.355 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:09:31.355 | 30.00th=[13304], 40.00th=[13698], 50.00th=[16319], 60.00th=[16909], 00:09:31.355 | 70.00th=[31589], 80.00th=[34341], 90.00th=[41681], 95.00th=[45876], 00:09:31.355 | 99.00th=[50070], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:09:31.355 | 99.99th=[56886] 00:09:31.355 bw ( KiB/s): min=11144, max=13458, per=16.69%, avg=12301.00, stdev=1636.25, samples=2 00:09:31.355 iops : min= 2786, max= 3364, avg=3075.00, stdev=408.71, samples=2 00:09:31.355 lat (usec) : 1000=0.02% 00:09:31.355 lat (msec) : 10=0.83%, 20=63.43%, 50=35.21%, 100=0.51% 00:09:31.356 cpu : usr=2.79%, sys=9.88%, ctx=275, majf=0, minf=1 00:09:31.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:31.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.356 issued rwts: total=2832,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.356 00:09:31.356 Run status group 0 (all jobs): 00:09:31.356 READ: bw=68.0MiB/s (71.3MB/s), 11.0MiB/s-24.0MiB/s (11.6MB/s-25.1MB/s), io=68.2MiB (71.5MB), run=1001-1003msec 00:09:31.356 WRITE: bw=72.0MiB/s (75.5MB/s), 12.0MiB/s-25.9MiB/s (12.5MB/s-27.1MB/s), io=72.2MiB (75.7MB), run=1001-1003msec 00:09:31.356 00:09:31.356 Disk stats (read/write): 00:09:31.356 nvme0n1: ios=2642/3072, merge=0/0, ticks=14303/11017, in_queue=25320, util=89.18% 00:09:31.356 nvme0n2: ios=5527/5632, merge=0/0, ticks=17203/14983, in_queue=32186, util=89.00% 00:09:31.356 nvme0n3: ios=4608/4988, merge=0/0, ticks=17161/16013, in_queue=33174, util=89.16% 00:09:31.356 nvme0n4: ios=2299/2560, merge=0/0, ticks=24182/27597, in_queue=51779, util=89.80% 00:09:31.356 21:31:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:31.356 [global] 00:09:31.356 thread=1 00:09:31.356 invalidate=1 00:09:31.356 rw=randwrite 00:09:31.356 time_based=1 00:09:31.356 runtime=1 00:09:31.356 ioengine=libaio 00:09:31.356 direct=1 00:09:31.356 bs=4096 00:09:31.356 iodepth=128 00:09:31.356 norandommap=0 00:09:31.356 numjobs=1 00:09:31.356 00:09:31.356 verify_dump=1 00:09:31.356 verify_backlog=512 00:09:31.356 verify_state_save=0 00:09:31.356 do_verify=1 00:09:31.356 verify=crc32c-intel 00:09:31.356 [job0] 00:09:31.356 filename=/dev/nvme0n1 00:09:31.356 [job1] 00:09:31.356 filename=/dev/nvme0n2 00:09:31.356 [job2] 00:09:31.356 filename=/dev/nvme0n3 00:09:31.356 [job3] 00:09:31.356 filename=/dev/nvme0n4 00:09:31.356 Could not set queue depth (nvme0n1) 00:09:31.356 Could not set queue depth (nvme0n2) 00:09:31.356 Could not set queue depth (nvme0n3) 00:09:31.356 Could not set queue depth (nvme0n4) 
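The [global] section printed above for the fio.sh@53 run (rw=randwrite, bs=4096, iodepth=128, runtime=1, crc32c verify) can be reproduced against a single namespace with a plain fio command line; a rough sketch built from those same option values, assuming one device is enough for illustration:

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
    --rw=randwrite --bs=4096 --iodepth=128 --time_based --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1

With verify=crc32c-intel and do_verify=1, fio checksums every 4 KiB block it writes and reads it back to check it, draining the verify queue after every 512 writes (verify_backlog), which is why each job below reports both read and write activity. The "Could not set queue depth" lines are non-fatal warnings; all four jobs still start and finish with err= 0.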
00:09:31.356 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.356 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.356 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.356 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.356 fio-3.35 00:09:31.356 Starting 4 threads 00:09:32.734 00:09:32.734 job0: (groupid=0, jobs=1): err= 0: pid=67949: Wed Jul 24 21:31:17 2024 00:09:32.734 read: IOPS=2986, BW=11.7MiB/s (12.2MB/s)(11.8MiB/1009msec) 00:09:32.734 slat (usec): min=9, max=16701, avg=168.23, stdev=1292.65 00:09:32.734 clat (usec): min=7935, max=36792, avg=22393.67, stdev=3208.40 00:09:32.734 lat (usec): min=7947, max=42043, avg=22561.90, stdev=3395.23 00:09:32.734 clat percentiles (usec): 00:09:32.734 | 1.00th=[ 8455], 5.00th=[18744], 10.00th=[20055], 20.00th=[21103], 00:09:32.734 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22414], 60.00th=[22414], 00:09:32.734 | 70.00th=[22676], 80.00th=[23462], 90.00th=[26870], 95.00th=[27657], 00:09:32.734 | 99.00th=[29230], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:09:32.734 | 99.99th=[36963] 00:09:32.734 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:09:32.734 slat (usec): min=5, max=14469, avg=152.15, stdev=1014.14 00:09:32.734 clat (usec): min=8483, max=27680, avg=19660.32, stdev=3078.34 00:09:32.734 lat (usec): min=8508, max=27726, avg=19812.47, stdev=2942.55 00:09:32.734 clat percentiles (usec): 00:09:32.734 | 1.00th=[ 9765], 5.00th=[12911], 10.00th=[15795], 20.00th=[18220], 00:09:32.734 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20317], 60.00th=[20841], 00:09:32.734 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21890], 95.00th=[23200], 00:09:32.734 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27657], 00:09:32.734 | 99.99th=[27657] 00:09:32.734 bw ( KiB/s): min=12288, max=12288, per=17.43%, avg=12288.00, stdev= 0.00, samples=2 00:09:32.734 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:32.734 lat (msec) : 10=2.33%, 20=22.28%, 50=75.38% 00:09:32.734 cpu : usr=2.98%, sys=8.83%, ctx=133, majf=0, minf=15 00:09:32.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.734 issued rwts: total=3013,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.734 job1: (groupid=0, jobs=1): err= 0: pid=67950: Wed Jul 24 21:31:17 2024 00:09:32.734 read: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1002msec) 00:09:32.734 slat (usec): min=3, max=6158, avg=79.14, stdev=466.67 00:09:32.734 clat (usec): min=874, max=28151, avg=10879.08, stdev=2402.42 00:09:32.734 lat (usec): min=3198, max=29163, avg=10958.22, stdev=2428.96 00:09:32.734 clat percentiles (usec): 00:09:32.734 | 1.00th=[ 5735], 5.00th=[ 8225], 10.00th=[ 9896], 20.00th=[10159], 00:09:32.734 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 00:09:32.734 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[13829], 00:09:32.734 | 99.00th=[23200], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:09:32.734 | 99.99th=[28181] 00:09:32.734 write: IOPS=6131, 
BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:09:32.735 slat (usec): min=8, max=6270, avg=77.97, stdev=415.72 00:09:32.735 clat (usec): min=4978, max=20630, avg=10015.28, stdev=1461.70 00:09:32.735 lat (usec): min=6206, max=22511, avg=10093.24, stdev=1430.17 00:09:32.735 clat percentiles (usec): 00:09:32.735 | 1.00th=[ 6652], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:09:32.735 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:09:32.735 | 70.00th=[10159], 80.00th=[10290], 90.00th=[11338], 95.00th=[13698], 00:09:32.735 | 99.00th=[15270], 99.50th=[16188], 99.90th=[17171], 99.95th=[17171], 00:09:32.735 | 99.99th=[20579] 00:09:32.735 bw ( KiB/s): min=24520, max=24632, per=34.86%, avg=24576.00, stdev=79.20, samples=2 00:09:32.735 iops : min= 6130, max= 6158, avg=6144.00, stdev=19.80, samples=2 00:09:32.735 lat (usec) : 1000=0.01% 00:09:32.735 lat (msec) : 4=0.38%, 10=36.27%, 20=62.20%, 50=1.14% 00:09:32.735 cpu : usr=5.69%, sys=14.79%, ctx=402, majf=0, minf=13 00:09:32.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.735 issued rwts: total=6010,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.735 job2: (groupid=0, jobs=1): err= 0: pid=67951: Wed Jul 24 21:31:17 2024 00:09:32.735 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:09:32.735 slat (usec): min=4, max=9465, avg=92.78, stdev=475.01 00:09:32.735 clat (usec): min=7594, max=34465, avg=12497.80, stdev=2855.08 00:09:32.735 lat (usec): min=7617, max=34488, avg=12590.59, stdev=2852.83 00:09:32.735 clat percentiles (usec): 00:09:32.735 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[11469], 20.00th=[11600], 00:09:32.735 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:09:32.735 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12911], 95.00th=[17957], 00:09:32.735 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27657], 99.95th=[28705], 00:09:32.735 | 99.99th=[34341] 00:09:32.735 write: IOPS=5469, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1005msec); 0 zone resets 00:09:32.735 slat (usec): min=10, max=7605, avg=88.72, stdev=411.20 00:09:32.735 clat (usec): min=612, max=21988, avg=11443.82, stdev=1920.18 00:09:32.735 lat (usec): min=4676, max=22002, avg=11532.54, stdev=1891.39 00:09:32.735 clat percentiles (usec): 00:09:32.735 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[10945], 00:09:32.735 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:09:32.735 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12518], 95.00th=[13960], 00:09:32.735 | 99.00th=[21627], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:09:32.735 | 99.99th=[21890] 00:09:32.735 bw ( KiB/s): min=20160, max=22792, per=30.46%, avg=21476.00, stdev=1861.11, samples=2 00:09:32.735 iops : min= 5040, max= 5698, avg=5369.00, stdev=465.28, samples=2 00:09:32.735 lat (usec) : 750=0.01% 00:09:32.735 lat (msec) : 10=5.57%, 20=91.67%, 50=2.75% 00:09:32.735 cpu : usr=5.58%, sys=12.85%, ctx=344, majf=0, minf=11 00:09:32.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.735 issued rwts: total=5120,5497,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:32.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.735 job3: (groupid=0, jobs=1): err= 0: pid=67952: Wed Jul 24 21:31:17 2024 00:09:32.735 read: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1008msec) 00:09:32.735 slat (usec): min=8, max=11528, avg=154.75, stdev=981.31 00:09:32.735 clat (usec): min=3710, max=38892, avg=21800.52, stdev=3042.51 00:09:32.735 lat (usec): min=8557, max=44270, avg=21955.26, stdev=3004.11 00:09:32.735 clat percentiles (usec): 00:09:32.735 | 1.00th=[13698], 5.00th=[15795], 10.00th=[19268], 20.00th=[21103], 00:09:32.735 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:09:32.735 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23987], 95.00th=[24511], 00:09:32.735 | 99.00th=[38011], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:09:32.735 | 99.99th=[39060] 00:09:32.735 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:09:32.735 slat (usec): min=6, max=22263, avg=169.17, stdev=1105.33 00:09:32.735 clat (usec): min=10265, max=35338, avg=20588.06, stdev=2835.13 00:09:32.735 lat (usec): min=14426, max=35369, avg=20757.23, stdev=2672.44 00:09:32.735 clat percentiles (usec): 00:09:32.735 | 1.00th=[12649], 5.00th=[16712], 10.00th=[18744], 20.00th=[19006], 00:09:32.735 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[20841], 00:09:32.735 | 70.00th=[21103], 80.00th=[21627], 90.00th=[21890], 95.00th=[25035], 00:09:32.735 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:09:32.735 | 99.99th=[35390] 00:09:32.735 bw ( KiB/s): min=12288, max=12288, per=17.43%, avg=12288.00, stdev= 0.00, samples=2 00:09:32.735 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:32.735 lat (msec) : 4=0.02%, 10=0.17%, 20=24.07%, 50=75.74% 00:09:32.735 cpu : usr=2.38%, sys=9.63%, ctx=131, majf=0, minf=11 00:09:32.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.735 issued rwts: total=2955,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.735 00:09:32.735 Run status group 0 (all jobs): 00:09:32.735 READ: bw=66.2MiB/s (69.4MB/s), 11.5MiB/s-23.4MiB/s (12.0MB/s-24.6MB/s), io=66.8MiB (70.0MB), run=1002-1009msec 00:09:32.735 WRITE: bw=68.9MiB/s (72.2MB/s), 11.9MiB/s-24.0MiB/s (12.5MB/s-25.1MB/s), io=69.5MiB (72.8MB), run=1002-1009msec 00:09:32.735 00:09:32.735 Disk stats (read/write): 00:09:32.735 nvme0n1: ios=2562/2560, merge=0/0, ticks=55105/48639, in_queue=103744, util=89.38% 00:09:32.735 nvme0n2: ios=5162/5568, merge=0/0, ticks=50693/48036, in_queue=98729, util=88.17% 00:09:32.735 nvme0n3: ios=4480/4608, merge=0/0, ticks=12927/10987, in_queue=23914, util=88.48% 00:09:32.735 nvme0n4: ios=2448/2560, merge=0/0, ticks=51856/51031, in_queue=102887, util=89.75% 00:09:32.735 21:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:32.735 21:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67966 00:09:32.735 21:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:32.735 21:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:32.735 [global] 00:09:32.735 thread=1 00:09:32.735 
invalidate=1 00:09:32.735 rw=read 00:09:32.735 time_based=1 00:09:32.735 runtime=10 00:09:32.735 ioengine=libaio 00:09:32.735 direct=1 00:09:32.735 bs=4096 00:09:32.735 iodepth=1 00:09:32.735 norandommap=1 00:09:32.735 numjobs=1 00:09:32.735 00:09:32.735 [job0] 00:09:32.735 filename=/dev/nvme0n1 00:09:32.735 [job1] 00:09:32.735 filename=/dev/nvme0n2 00:09:32.735 [job2] 00:09:32.735 filename=/dev/nvme0n3 00:09:32.735 [job3] 00:09:32.735 filename=/dev/nvme0n4 00:09:32.735 Could not set queue depth (nvme0n1) 00:09:32.735 Could not set queue depth (nvme0n2) 00:09:32.735 Could not set queue depth (nvme0n3) 00:09:32.735 Could not set queue depth (nvme0n4) 00:09:32.735 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.735 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.735 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.735 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.735 fio-3.35 00:09:32.735 Starting 4 threads 00:09:36.061 21:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:36.061 fio: pid=68009, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:36.061 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=51187712, buflen=4096 00:09:36.061 21:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:36.061 fio: pid=68008, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:36.061 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=71315456, buflen=4096 00:09:36.061 21:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.061 21:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:36.320 fio: pid=68006, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:36.320 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=60403712, buflen=4096 00:09:36.320 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.320 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:36.579 fio: pid=68007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:36.579 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20418560, buflen=4096 00:09:36.579 00:09:36.579 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68006: Wed Jul 24 21:31:21 2024 00:09:36.579 read: IOPS=4337, BW=16.9MiB/s (17.8MB/s)(57.6MiB/3400msec) 00:09:36.579 slat (usec): min=7, max=14812, avg=15.85, stdev=212.35 00:09:36.579 clat (usec): min=111, max=7274, avg=213.51, stdev=130.63 00:09:36.579 lat (usec): min=131, max=14980, avg=229.36, stdev=248.66 00:09:36.579 clat percentiles (usec): 00:09:36.579 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 157], 00:09:36.579 | 30.00th=[ 178], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 
229], 00:09:36.579 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:09:36.579 | 99.00th=[ 297], 99.50th=[ 396], 99.90th=[ 1876], 99.95th=[ 3490], 00:09:36.579 | 99.99th=[ 5932] 00:09:36.579 bw ( KiB/s): min=15648, max=21168, per=23.35%, avg=16860.00, stdev=2120.09, samples=6 00:09:36.579 iops : min= 3912, max= 5292, avg=4215.00, stdev=530.02, samples=6 00:09:36.579 lat (usec) : 250=85.74%, 500=13.93%, 750=0.16%, 1000=0.02% 00:09:36.579 lat (msec) : 2=0.06%, 4=0.05%, 10=0.03% 00:09:36.579 cpu : usr=1.06%, sys=4.62%, ctx=14754, majf=0, minf=1 00:09:36.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 issued rwts: total=14748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.579 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68007: Wed Jul 24 21:31:21 2024 00:09:36.579 read: IOPS=5843, BW=22.8MiB/s (23.9MB/s)(83.5MiB/3657msec) 00:09:36.579 slat (usec): min=9, max=16564, avg=15.49, stdev=177.61 00:09:36.579 clat (usec): min=112, max=33512, avg=154.18, stdev=257.53 00:09:36.579 lat (usec): min=123, max=33541, avg=169.67, stdev=313.11 00:09:36.579 clat percentiles (usec): 00:09:36.579 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:09:36.579 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:09:36.579 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 182], 00:09:36.579 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 619], 99.95th=[ 1287], 00:09:36.579 | 99.99th=[ 2507] 00:09:36.579 bw ( KiB/s): min=21213, max=24632, per=32.50%, avg=23471.57, stdev=1321.44, samples=7 00:09:36.579 iops : min= 5303, max= 6158, avg=5867.86, stdev=330.43, samples=7 00:09:36.579 lat (usec) : 250=99.76%, 500=0.12%, 750=0.05%, 1000=0.01% 00:09:36.579 lat (msec) : 2=0.04%, 4=0.01%, 20=0.01%, 50=0.01% 00:09:36.579 cpu : usr=1.75%, sys=6.76%, ctx=21378, majf=0, minf=1 00:09:36.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 issued rwts: total=21370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.579 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68008: Wed Jul 24 21:31:21 2024 00:09:36.579 read: IOPS=5444, BW=21.3MiB/s (22.3MB/s)(68.0MiB/3198msec) 00:09:36.579 slat (usec): min=10, max=11843, avg=14.56, stdev=106.00 00:09:36.579 clat (usec): min=122, max=2214, avg=167.71, stdev=27.66 00:09:36.579 lat (usec): min=134, max=12021, avg=182.27, stdev=109.70 00:09:36.579 clat percentiles (usec): 00:09:36.579 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:36.579 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:09:36.579 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 198], 00:09:36.579 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 396], 99.95th=[ 586], 00:09:36.579 | 99.99th=[ 1090] 00:09:36.579 bw ( KiB/s): min=21136, max=22224, per=30.30%, avg=21880.00, stdev=472.56, samples=6 00:09:36.579 iops : min= 5284, max= 5556, avg=5470.00, stdev=118.14, samples=6 
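The err=121 annotations on these four jobs are the point of this pass: while the 10 second read run from fio.sh@58 was still in flight, the test tore the backing bdevs out from under the target (the fio.sh@63/@64/@66 calls traced above), so outstanding reads complete with Remote I/O error, and 121 is the Linux errno value for EREMOTEIO. Condensed, the hot-removal side of it is just:

# issued while the background read job is still running
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1

A non-zero fio exit status is the expected outcome of this pass, which the "nvmf hotplug test: fio failed as expected" message further down confirms.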
00:09:36.579 lat (usec) : 250=99.80%, 500=0.13%, 750=0.03%, 1000=0.02% 00:09:36.579 lat (msec) : 2=0.01%, 4=0.01% 00:09:36.579 cpu : usr=1.97%, sys=6.04%, ctx=17423, majf=0, minf=1 00:09:36.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 issued rwts: total=17412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.579 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68009: Wed Jul 24 21:31:21 2024 00:09:36.579 read: IOPS=4264, BW=16.7MiB/s (17.5MB/s)(48.8MiB/2931msec) 00:09:36.579 slat (nsec): min=7429, max=67282, avg=12923.05, stdev=4030.71 00:09:36.579 clat (usec): min=125, max=1910, avg=220.46, stdev=45.83 00:09:36.579 lat (usec): min=146, max=1920, avg=233.38, stdev=45.66 00:09:36.579 clat percentiles (usec): 00:09:36.579 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 192], 00:09:36.579 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:09:36.579 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:09:36.579 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 537], 99.95th=[ 668], 00:09:36.579 | 99.99th=[ 1827] 00:09:36.579 bw ( KiB/s): min=15968, max=21664, per=23.96%, avg=17302.40, stdev=2445.22, samples=5 00:09:36.579 iops : min= 3992, max= 5416, avg=4325.60, stdev=611.31, samples=5 00:09:36.579 lat (usec) : 250=85.50%, 500=14.36%, 750=0.08% 00:09:36.579 lat (msec) : 2=0.05% 00:09:36.579 cpu : usr=0.96%, sys=4.91%, ctx=12499, majf=0, minf=1 00:09:36.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.579 issued rwts: total=12498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.579 00:09:36.579 Run status group 0 (all jobs): 00:09:36.579 READ: bw=70.5MiB/s (73.9MB/s), 16.7MiB/s-22.8MiB/s (17.5MB/s-23.9MB/s), io=258MiB (270MB), run=2931-3657msec 00:09:36.579 00:09:36.579 Disk stats (read/write): 00:09:36.579 nvme0n1: ios=14580/0, merge=0/0, ticks=2985/0, in_queue=2985, util=94.59% 00:09:36.579 nvme0n2: ios=21137/0, merge=0/0, ticks=3333/0, in_queue=3333, util=95.40% 00:09:36.579 nvme0n3: ios=16998/0, merge=0/0, ticks=2884/0, in_queue=2884, util=96.30% 00:09:36.579 nvme0n4: ios=12251/0, merge=0/0, ticks=2648/0, in_queue=2648, util=96.73% 00:09:36.579 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.579 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:36.838 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.839 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:37.097 21:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.098 21:31:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:37.356 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.356 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:37.614 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.614 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:37.873 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67966 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.874 nvmf hotplug test: fio failed as expected 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:37.874 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
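Condensed, the teardown traced here comes down to the following sketch (the waitforserial_disconnect helper is simplified to a bare polling loop; the real helper bounds its retries):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done   # simplified wait for the namespaces to disappear
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state

After that, nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules and kills the nvmf target process, which is the rmmod/killprocess output below.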
00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.133 rmmod nvme_tcp 00:09:38.133 rmmod nvme_fabrics 00:09:38.133 rmmod nvme_keyring 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67584 ']' 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67584 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67584 ']' 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67584 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67584 00:09:38.133 killing process with pid 67584 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67584' 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67584 00:09:38.133 21:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67584 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:38.392 00:09:38.392 real 0m18.926s 00:09:38.392 user 1m9.997s 00:09:38.392 sys 0m11.006s 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.392 ************************************ 
00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.392 END TEST nvmf_fio_target 00:09:38.392 ************************************ 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.392 ************************************ 00:09:38.392 START TEST nvmf_bdevio 00:09:38.392 ************************************ 00:09:38.392 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:38.651 * Looking for test storage... 00:09:38.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
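nvmftestinit next builds the veth/bridge topology that the trace below walks through; condensed to the essential commands, with the interface and namespace names as in the trace and the second target interface omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # sanity check before the target starts

Running the target inside its own network namespace gives it a separate network stack, so the listener at 10.0.0.2:4420 is reached over the bridge the same way a remote host would reach it, while the initiator stays in the root namespace at 10.0.0.1.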
00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.651 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:38.652 21:31:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:38.652 Cannot find device "nvmf_tgt_br" 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.652 Cannot find device "nvmf_tgt_br2" 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:38.652 Cannot find device "nvmf_tgt_br" 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:38.652 Cannot find device "nvmf_tgt_br2" 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:38.652 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:38.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:09:38.911 00:09:38.911 --- 10.0.0.2 ping statistics --- 00:09:38.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.911 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:38.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:38.911 00:09:38.911 --- 10.0.0.3 ping statistics --- 00:09:38.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.911 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:38.911 00:09:38.911 --- 10.0.0.1 ping statistics --- 00:09:38.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.911 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68276 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68276 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68276 ']' 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.911 21:31:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.170 [2024-07-24 21:31:23.924579] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:09:39.170 [2024-07-24 21:31:23.924679] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.170 [2024-07-24 21:31:24.063544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.428 [2024-07-24 21:31:24.174323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.428 [2024-07-24 21:31:24.174372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.428 [2024-07-24 21:31:24.174382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.428 [2024-07-24 21:31:24.174389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.428 [2024-07-24 21:31:24.174396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.428 [2024-07-24 21:31:24.174567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:39.428 [2024-07-24 21:31:24.174936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:39.428 [2024-07-24 21:31:24.175032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:39.428 [2024-07-24 21:31:24.175036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.428 [2024-07-24 21:31:24.243764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.995 [2024-07-24 21:31:24.966318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.995 21:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.254 Malloc0 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.254 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.255 [2024-07-24 21:31:25.037677] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:40.255 { 00:09:40.255 "params": { 00:09:40.255 "name": "Nvme$subsystem", 00:09:40.255 "trtype": "$TEST_TRANSPORT", 00:09:40.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.255 "adrfam": "ipv4", 00:09:40.255 "trsvcid": "$NVMF_PORT", 00:09:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.255 "hdgst": ${hdgst:-false}, 00:09:40.255 "ddgst": ${ddgst:-false} 00:09:40.255 }, 00:09:40.255 "method": "bdev_nvme_attach_controller" 00:09:40.255 } 00:09:40.255 EOF 00:09:40.255 )") 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
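The target side of this bdevio run is configured with four RPCs; expressed directly against rpc.py the equivalent is roughly the following, with addresses, NQN, serial and sizes taken from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, -u sets the in-capsule data size
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM bdev with 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then attaches as an initiator using the JSON config printed just below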
00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:40.255 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.255 "params": { 00:09:40.255 "name": "Nvme1", 00:09:40.255 "trtype": "tcp", 00:09:40.255 "traddr": "10.0.0.2", 00:09:40.255 "adrfam": "ipv4", 00:09:40.255 "trsvcid": "4420", 00:09:40.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.255 "hdgst": false, 00:09:40.255 "ddgst": false 00:09:40.255 }, 00:09:40.255 "method": "bdev_nvme_attach_controller" 00:09:40.255 }' 00:09:40.255 [2024-07-24 21:31:25.099269] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:09:40.255 [2024-07-24 21:31:25.099374] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68312 ] 00:09:40.255 [2024-07-24 21:31:25.242724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.513 [2024-07-24 21:31:25.352944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.513 [2024-07-24 21:31:25.353101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.513 [2024-07-24 21:31:25.353107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.513 [2024-07-24 21:31:25.436757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:40.773 I/O targets: 00:09:40.773 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:40.773 00:09:40.773 00:09:40.773 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.773 http://cunit.sourceforge.net/ 00:09:40.773 00:09:40.773 00:09:40.773 Suite: bdevio tests on: Nvme1n1 00:09:40.773 Test: blockdev write read block ...passed 00:09:40.773 Test: blockdev write zeroes read block ...passed 00:09:40.773 Test: blockdev write zeroes read no split ...passed 00:09:40.773 Test: blockdev write zeroes read split ...passed 00:09:40.773 Test: blockdev write zeroes read split partial ...passed 00:09:40.773 Test: blockdev reset ...[2024-07-24 21:31:25.599234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:40.773 [2024-07-24 21:31:25.599361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20197c0 (9): Bad file descriptor 00:09:40.773 [2024-07-24 21:31:25.614224] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.773 passed 00:09:40.773 Test: blockdev write read 8 blocks ...passed 00:09:40.773 Test: blockdev write read size > 128k ...passed 00:09:40.773 Test: blockdev write read invalid size ...passed 00:09:40.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.773 Test: blockdev write read max offset ...passed 00:09:40.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.773 Test: blockdev writev readv 8 blocks ...passed 00:09:40.773 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.773 Test: blockdev writev readv block ...passed 00:09:40.773 Test: blockdev writev readv size > 128k ...passed 00:09:40.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.773 Test: blockdev comparev and writev ...[2024-07-24 21:31:25.621826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.621910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.621934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.621945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.622322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.622347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.622364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.622374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.622841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.622885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.622901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.622910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.623284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.623311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.623328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:40.773 [2024-07-24 21:31:25.623338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:40.773 passed 00:09:40.773 Test: blockdev nvme passthru rw ...passed 00:09:40.773 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:31:25.624246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:40.773 [2024-07-24 21:31:25.624285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.624405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:40.773 [2024-07-24 21:31:25.624425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:40.773 [2024-07-24 21:31:25.624537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:40.773 [2024-07-24 21:31:25.624557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:40.773 passed 00:09:40.773 Test: blockdev nvme admin passthru ...[2024-07-24 21:31:25.624654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:40.773 [2024-07-24 21:31:25.624670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:40.773 passed 00:09:40.773 Test: blockdev copy ...passed 00:09:40.773 00:09:40.773 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.773 suites 1 1 n/a 0 0 00:09:40.773 tests 23 23 23 0 0 00:09:40.773 asserts 152 152 152 0 n/a 00:09:40.773 00:09:40.773 Elapsed time = 0.148 seconds 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.032 21:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.032 rmmod nvme_tcp 00:09:41.032 rmmod nvme_fabrics 00:09:41.032 rmmod nvme_keyring 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
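nvmftestfini, traced around this point, unwinds the initiator modules and then the target process; schematically, with the pid and interface names from the trace and the retry loop's exit condition assumed:

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # unload initiator modules, retrying while busy
  done
  set -e
  [ -n "$nvmfpid" ] && killprocess "$nvmfpid"      # harness helper: terminate and reap nvmf_tgt (pid 68276 in this run)
  _remove_spdk_ns                                  # harness helper: delete nvmf_tgt_ns_spdk and its veth pair
  ip -4 addr flush nvmf_init_if                    # drop the initiator-side address left in the root namespace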
00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68276 ']' 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68276 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68276 ']' 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68276 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.032 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68276 00:09:41.291 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:41.291 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:41.291 killing process with pid 68276 00:09:41.291 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68276' 00:09:41.291 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68276 00:09:41.291 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68276 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:41.550 00:09:41.550 real 0m2.997s 00:09:41.550 user 0m10.157s 00:09:41.550 sys 0m0.842s 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.550 ************************************ 00:09:41.550 END TEST nvmf_bdevio 00:09:41.550 ************************************ 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:41.550 00:09:41.550 real 2m33.669s 00:09:41.550 user 6m48.905s 00:09:41.550 sys 0m54.133s 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.550 ************************************ 00:09:41.550 END TEST nvmf_target_core 00:09:41.550 ************************************ 00:09:41.550 21:31:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:41.550 21:31:26 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.550 21:31:26 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.550 21:31:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.550 ************************************ 00:09:41.550 START TEST nvmf_target_extra 00:09:41.550 ************************************ 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:41.550 * Looking for test storage... 00:09:41.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.550 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.551 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:41.811 21:31:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:41.811 ************************************ 00:09:41.811 START TEST nvmf_auth_target 00:09:41.811 ************************************ 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:41.811 * Looking for test storage... 00:09:41.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.811 21:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.811 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.812 21:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:41.812 Cannot find device "nvmf_tgt_br" 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.812 Cannot find device "nvmf_tgt_br2" 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:41.812 Cannot find device "nvmf_tgt_br" 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:41.812 Cannot find device "nvmf_tgt_br2" 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:41.812 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.072 21:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:42.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:09:42.072 00:09:42.072 --- 10.0.0.2 ping statistics --- 00:09:42.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.072 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:42.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:42.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:42.072 00:09:42.072 --- 10.0.0.3 ping statistics --- 00:09:42.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.072 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:42.072 00:09:42.072 --- 10.0.0.1 ping statistics --- 00:09:42.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.072 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68538 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68538 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68538 ']' 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
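[editor's note] The nvmf_veth_init block traced above builds the test network: a namespace for the target, two veth pairs whose bridge-side ends are enslaved to nvmf_br, addresses 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, an iptables rule accepting TCP port 4420, and ping checks in both directions. A condensed, standalone sketch of the same commands (interface, namespace and address names copied from the trace; the second target interface is omitted for brevity; needs root):

    #!/usr/bin/env bash
    # Condensed sketch of the veth/namespace topology set up by nvmf_veth_init above.
    set -e
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                  # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator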
00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.072 21:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68570 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=613fbd51ef65e91268340f0bd9bc57b6f16388665236d665 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.T5w 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 613fbd51ef65e91268340f0bd9bc57b6f16388665236d665 0 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 613fbd51ef65e91268340f0bd9bc57b6f16388665236d665 0 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=613fbd51ef65e91268340f0bd9bc57b6f16388665236d665 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.450 21:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.T5w 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.T5w 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.T5w 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=324bf2463283d06d89277792b5fb51747b144029d479055a1cacc294ed855b52 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vt5 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 324bf2463283d06d89277792b5fb51747b144029d479055a1cacc294ed855b52 3 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 324bf2463283d06d89277792b5fb51747b144029d479055a1cacc294ed855b52 3 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=324bf2463283d06d89277792b5fb51747b144029d479055a1cacc294ed855b52 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vt5 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vt5 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.vt5 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:43.450 21:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff235ff2c72d489a2330b185caffb29d 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4RS 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff235ff2c72d489a2330b185caffb29d 1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff235ff2c72d489a2330b185caffb29d 1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff235ff2c72d489a2330b185caffb29d 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4RS 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4RS 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.4RS 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d29ae624990d11bde47195af86069d73e070fdb50eda458b 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LM7 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d29ae624990d11bde47195af86069d73e070fdb50eda458b 2 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d29ae624990d11bde47195af86069d73e070fdb50eda458b 2 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d29ae624990d11bde47195af86069d73e070fdb50eda458b 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LM7 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LM7 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.LM7 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:43.450 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c233d8da2d1b820246f63685c3c8bf5985161956484af3ba 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DjO 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c233d8da2d1b820246f63685c3c8bf5985161956484af3ba 2 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c233d8da2d1b820246f63685c3c8bf5985161956484af3ba 2 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c233d8da2d1b820246f63685c3c8bf5985161956484af3ba 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DjO 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DjO 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.DjO 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.451 21:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:43.451 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8060a351076efa2f2beec905c41110aa 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Vrs 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8060a351076efa2f2beec905c41110aa 1 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8060a351076efa2f2beec905c41110aa 1 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8060a351076efa2f2beec905c41110aa 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Vrs 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Vrs 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Vrs 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb5553b1388a7ceda2f88b0e786b3fcbeb1fc522a63cca285d3f9aa39ac8aa6c 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:43.709 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.n2B 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
cb5553b1388a7ceda2f88b0e786b3fcbeb1fc522a63cca285d3f9aa39ac8aa6c 3 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb5553b1388a7ceda2f88b0e786b3fcbeb1fc522a63cca285d3f9aa39ac8aa6c 3 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb5553b1388a7ceda2f88b0e786b3fcbeb1fc522a63cca285d3f9aa39ac8aa6c 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.n2B 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.n2B 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.n2B 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68538 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68538 ']' 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.710 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68570 /var/tmp/host.sock 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68570 ']' 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
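[editor's note] The gen_dhchap_key calls above (four keys plus three controller keys) all follow the same recipe: read len/2 random bytes with xxd -p, look the digest name up in the null/sha256/sha384/sha512 to 0..3 map, format the hex string into a DHHC-1 secret with a small Python helper, and store it in a chmod-0600 temp file. A minimal sketch of that flow; the DHHC-1 layout used here, base64 of the key bytes followed by a little-endian CRC-32, is an assumption inferred from the secrets that show up later in the nvme connect lines, not something the trace states explicitly:

    # Sketch of the gen_dhchap_key flow traced above (digest-id map from target/auth.sh).
    # ASSUMPTION: the secret is DHHC-1:<digest-id>:base64(key || CRC-32 LE):
    gen_dhchap_key() {                       # usage: gen_dhchap_key <digest> <len>
        local digest=$1 len=$2 key file
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                       # the hex string itself is the secret
    crc = zlib.crc32(key).to_bytes(4, "little")      # assumed little-endian CRC-32 suffix
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
    EOF
        chmod 0600 "$file"
        echo "$file"
    }
    gen_dhchap_key null 48    # e.g. /tmp/spdk.key-null.XXX holding a DHHC-1:00:... secret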
00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.968 21:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.T5w 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.T5w 00:09:44.226 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.T5w 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.vt5 ]] 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vt5 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vt5 00:09:44.484 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vt5 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4RS 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4RS 00:09:44.743 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4RS 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.LM7 ]] 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LM7 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LM7 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LM7 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DjO 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DjO 00:09:45.002 21:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DjO 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Vrs ]] 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vrs 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vrs 00:09:45.260 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vrs 00:09:45.517 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:45.517 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.n2B 00:09:45.517 21:31:30 
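[editor's note] Each generated key file is then registered twice: rpc_cmd loads it into the nvmf_tgt keyring over the default /var/tmp/spdk.sock RPC socket, and hostrpc loads the same file into the host-side spdk_tgt keyring over /var/tmp/host.sock, with controller keys stored under ckeyN names. For one pair the loop boils down to the following (key file names are the ones generated in this run):

    # Register the key0/ckey0 pair with both SPDK applications, as in the
    # target/auth.sh loop traced above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (nvmf_tgt, default RPC socket /var/tmp/spdk.sock)
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.T5w
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vt5

    # host side (spdk_tgt listening on /var/tmp/host.sock)
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.T5w
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vt5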
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.517 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.517 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.517 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.n2B 00:09:45.517 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.n2B 00:09:45.775 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:09:45.775 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:09:45.775 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:45.775 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:45.775 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:45.775 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:46.034 21:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:09:46.292 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:46.292 { 00:09:46.292 "cntlid": 1, 00:09:46.292 "qid": 0, 00:09:46.292 "state": "enabled", 00:09:46.292 "thread": "nvmf_tgt_poll_group_000", 00:09:46.292 "listen_address": { 00:09:46.292 "trtype": "TCP", 00:09:46.292 "adrfam": "IPv4", 00:09:46.292 "traddr": "10.0.0.2", 00:09:46.292 "trsvcid": "4420" 00:09:46.292 }, 00:09:46.292 "peer_address": { 00:09:46.292 "trtype": "TCP", 00:09:46.292 "adrfam": "IPv4", 00:09:46.292 "traddr": "10.0.0.1", 00:09:46.292 "trsvcid": "54208" 00:09:46.292 }, 00:09:46.292 "auth": { 00:09:46.292 "state": "completed", 00:09:46.292 "digest": "sha256", 00:09:46.292 "dhgroup": "null" 00:09:46.292 } 00:09:46.292 } 00:09:46.292 ]' 00:09:46.292 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.551 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.809 21:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:09:50.996 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.996 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.997 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:50.997 21:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:51.255 { 00:09:51.255 "cntlid": 3, 00:09:51.255 "qid": 0, 00:09:51.255 "state": "enabled", 00:09:51.255 "thread": "nvmf_tgt_poll_group_000", 00:09:51.255 "listen_address": { 00:09:51.255 "trtype": "TCP", 00:09:51.255 "adrfam": "IPv4", 00:09:51.255 "traddr": "10.0.0.2", 00:09:51.255 "trsvcid": "4420" 00:09:51.255 }, 00:09:51.255 "peer_address": { 00:09:51.255 "trtype": "TCP", 00:09:51.255 "adrfam": "IPv4", 00:09:51.255 "traddr": "10.0.0.1", 00:09:51.255 "trsvcid": "54250" 00:09:51.255 }, 00:09:51.255 "auth": { 00:09:51.255 "state": "completed", 00:09:51.255 "digest": "sha256", 00:09:51.255 "dhgroup": "null" 00:09:51.255 } 00:09:51.255 } 00:09:51.255 ]' 00:09:51.255 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:51.256 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:51.256 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:51.256 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:51.256 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:51.514 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.514 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.514 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.515 21:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:52.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
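[editor's note] Each connect_authenticate iteration visible above follows the same pattern: restrict the host's DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, allow the host NQN on the subsystem with the chosen key pair, attach a controller through the host-side bdev_nvme RPC, confirm via nvmf_subsystem_get_qpairs that the qpair's auth block reports the expected digest, dhgroup and completed state, detach, and finally exercise the same secrets through the kernel initiator with nvme connect. Condensed into the key1 iteration (sha256 digest, null DH group; NQNs, UUID and key files copied from the trace, with the literal DHHC-1 secrets replaced by cat of the generated key files):

    # One connect_authenticate pass (sha256 digest, "null" dhgroup, key1/ckey1),
    # condensed from the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTRPC="$RPC -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158

    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # the qpair's auth block should report the negotiated parameters,
    # e.g. "state": "completed", "digest": "sha256", "dhgroup": "null"
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq '.[0].auth'
    $HOSTRPC bdev_nvme_detach_controller nvme0

    # same key pair through the kernel initiator; the key files hold the DHHC-1 strings
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "${HOSTNQN#*uuid:}" \
        --dhchap-secret "$(cat /tmp/spdk.key-sha256.4RS)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha384.LM7)"
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"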
00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:52.082 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:52.340 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:09:52.340 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.341 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.599 00:09:52.599 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:52.599 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.599 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:52.858 { 00:09:52.858 "cntlid": 5, 00:09:52.858 "qid": 0, 00:09:52.858 "state": "enabled", 00:09:52.858 "thread": "nvmf_tgt_poll_group_000", 00:09:52.858 "listen_address": { 00:09:52.858 "trtype": "TCP", 00:09:52.858 "adrfam": "IPv4", 00:09:52.858 "traddr": "10.0.0.2", 00:09:52.858 "trsvcid": "4420" 00:09:52.858 }, 00:09:52.858 "peer_address": { 00:09:52.858 "trtype": "TCP", 00:09:52.858 "adrfam": "IPv4", 00:09:52.858 "traddr": "10.0.0.1", 00:09:52.858 "trsvcid": "54268" 00:09:52.858 }, 00:09:52.858 "auth": { 00:09:52.858 "state": "completed", 00:09:52.858 "digest": "sha256", 00:09:52.858 "dhgroup": "null" 00:09:52.858 } 00:09:52.858 } 00:09:52.858 ]' 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:52.858 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:53.116 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:53.116 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:53.116 21:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.375 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:53.941 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:53.941 21:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.199 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:54.200 21:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:54.457 00:09:54.457 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:54.457 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:54.457 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:54.716 { 00:09:54.716 "cntlid": 7, 00:09:54.716 "qid": 0, 00:09:54.716 "state": "enabled", 00:09:54.716 "thread": "nvmf_tgt_poll_group_000", 00:09:54.716 "listen_address": { 00:09:54.716 "trtype": "TCP", 00:09:54.716 "adrfam": "IPv4", 00:09:54.716 "traddr": 
"10.0.0.2", 00:09:54.716 "trsvcid": "4420" 00:09:54.716 }, 00:09:54.716 "peer_address": { 00:09:54.716 "trtype": "TCP", 00:09:54.716 "adrfam": "IPv4", 00:09:54.716 "traddr": "10.0.0.1", 00:09:54.716 "trsvcid": "54290" 00:09:54.716 }, 00:09:54.716 "auth": { 00:09:54.716 "state": "completed", 00:09:54.716 "digest": "sha256", 00:09:54.716 "dhgroup": "null" 00:09:54.716 } 00:09:54.716 } 00:09:54.716 ]' 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.716 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.974 21:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.540 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:55.798 21:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.798 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:56.055 00:09:56.055 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:56.055 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:56.055 21:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:56.313 { 00:09:56.313 "cntlid": 9, 00:09:56.313 "qid": 0, 00:09:56.313 "state": "enabled", 00:09:56.313 "thread": "nvmf_tgt_poll_group_000", 00:09:56.313 "listen_address": { 00:09:56.313 "trtype": "TCP", 00:09:56.313 "adrfam": "IPv4", 00:09:56.313 "traddr": "10.0.0.2", 00:09:56.313 "trsvcid": "4420" 00:09:56.313 }, 00:09:56.313 "peer_address": { 00:09:56.313 "trtype": "TCP", 00:09:56.313 "adrfam": "IPv4", 00:09:56.313 "traddr": "10.0.0.1", 00:09:56.313 "trsvcid": "53592" 00:09:56.313 }, 00:09:56.313 "auth": { 00:09:56.313 "state": "completed", 00:09:56.313 "digest": "sha256", 00:09:56.313 "dhgroup": "ffdhe2048" 00:09:56.313 } 00:09:56.313 } 
00:09:56.313 ]' 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:56.313 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:56.571 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.571 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.571 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.828 21:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:09:57.394 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.394 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:57.394 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.394 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.394 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.394 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:57.395 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.395 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.653 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.911 00:09:57.911 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:57.911 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:57.911 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:58.170 { 00:09:58.170 "cntlid": 11, 00:09:58.170 "qid": 0, 00:09:58.170 "state": "enabled", 00:09:58.170 "thread": "nvmf_tgt_poll_group_000", 00:09:58.170 "listen_address": { 00:09:58.170 "trtype": "TCP", 00:09:58.170 "adrfam": "IPv4", 00:09:58.170 "traddr": "10.0.0.2", 00:09:58.170 "trsvcid": "4420" 00:09:58.170 }, 00:09:58.170 "peer_address": { 00:09:58.170 "trtype": "TCP", 00:09:58.170 "adrfam": "IPv4", 00:09:58.170 "traddr": "10.0.0.1", 00:09:58.170 "trsvcid": "53622" 00:09:58.170 }, 00:09:58.170 "auth": { 00:09:58.170 "state": "completed", 00:09:58.170 "digest": "sha256", 00:09:58.170 "dhgroup": "ffdhe2048" 00:09:58.170 } 00:09:58.170 } 00:09:58.170 ]' 00:09:58.170 21:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:58.170 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.170 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:58.170 21:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:58.170 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:58.170 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.170 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.170 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.428 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:58.993 21:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.250 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
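The trace above repeats one authentication cycle per digest/DH-group/key combination: the target registers the host on the subsystem with the key pair under test, the SPDK host is constrained to that digest and DH group, a controller is attached so DH-HMAC-CHAP actually runs, and the negotiated parameters are read back from the qpair before everything is torn down for the next combination. A minimal sketch of a single cycle, using only commands that appear in the trace (the RPC/HOST_SOCK/SUBNQN/HOSTNQN variable names are added here for readability, key1/ckey1 are keyring key names created earlier in the test run, and target-side calls are assumed to go to the target application's default RPC socket):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158

# host side: restrict negotiation to the digest and DH group under test
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# target side: allow this host with the key (and optional controller key) under test
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach, which triggers DH-HMAC-CHAP against the listener
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
# target side: confirm the qpair finished authentication with the expected parameters
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect "completed"
# tear down before the next key/dhgroup combination
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The surrounding trace simply loops this cycle over the null, ffdhe2048, ffdhe3072 and ffdhe4096 DH groups and key indexes 0-3, as the rising cntlid values and the dhgroup field in the qpair dumps show.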
00:09:59.251 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.251 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.251 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.251 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.509 00:09:59.509 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:59.509 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:59.509 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:59.768 { 00:09:59.768 "cntlid": 13, 00:09:59.768 "qid": 0, 00:09:59.768 "state": "enabled", 00:09:59.768 "thread": "nvmf_tgt_poll_group_000", 00:09:59.768 "listen_address": { 00:09:59.768 "trtype": "TCP", 00:09:59.768 "adrfam": "IPv4", 00:09:59.768 "traddr": "10.0.0.2", 00:09:59.768 "trsvcid": "4420" 00:09:59.768 }, 00:09:59.768 "peer_address": { 00:09:59.768 "trtype": "TCP", 00:09:59.768 "adrfam": "IPv4", 00:09:59.768 "traddr": "10.0.0.1", 00:09:59.768 "trsvcid": "53654" 00:09:59.768 }, 00:09:59.768 "auth": { 00:09:59.768 "state": "completed", 00:09:59.768 "digest": "sha256", 00:09:59.768 "dhgroup": "ffdhe2048" 00:09:59.768 } 00:09:59.768 } 00:09:59.768 ]' 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.768 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:00.026 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:00.026 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:00.026 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.026 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.026 21:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.284 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:00.850 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.108 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:01.108 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:01.108 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:01.108 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:01.108 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:01.108 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.109 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:01.109 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.109 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.109 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.109 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:01.109 21:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:01.367 00:10:01.367 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:01.367 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:01.367 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:01.625 { 00:10:01.625 "cntlid": 15, 00:10:01.625 "qid": 0, 00:10:01.625 "state": "enabled", 00:10:01.625 "thread": "nvmf_tgt_poll_group_000", 00:10:01.625 "listen_address": { 00:10:01.625 "trtype": "TCP", 00:10:01.625 "adrfam": "IPv4", 00:10:01.625 "traddr": "10.0.0.2", 00:10:01.625 "trsvcid": "4420" 00:10:01.625 }, 00:10:01.625 "peer_address": { 00:10:01.625 "trtype": "TCP", 00:10:01.625 "adrfam": "IPv4", 00:10:01.625 "traddr": "10.0.0.1", 00:10:01.625 "trsvcid": "53670" 00:10:01.625 }, 00:10:01.625 "auth": { 00:10:01.625 "state": "completed", 00:10:01.625 "digest": "sha256", 00:10:01.625 "dhgroup": "ffdhe2048" 00:10:01.625 } 00:10:01.625 } 00:10:01.625 ]' 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.625 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.883 21:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:02.489 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.748 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.006 00:10:03.006 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:03.006 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.006 21:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:03.265 { 00:10:03.265 "cntlid": 17, 00:10:03.265 "qid": 0, 00:10:03.265 "state": "enabled", 00:10:03.265 "thread": "nvmf_tgt_poll_group_000", 00:10:03.265 "listen_address": { 00:10:03.265 "trtype": "TCP", 00:10:03.265 "adrfam": "IPv4", 00:10:03.265 "traddr": "10.0.0.2", 00:10:03.265 "trsvcid": "4420" 00:10:03.265 }, 00:10:03.265 "peer_address": { 00:10:03.265 "trtype": "TCP", 00:10:03.265 "adrfam": "IPv4", 00:10:03.265 "traddr": "10.0.0.1", 00:10:03.265 "trsvcid": "53686" 00:10:03.265 }, 00:10:03.265 "auth": { 00:10:03.265 "state": "completed", 00:10:03.265 "digest": "sha256", 00:10:03.265 "dhgroup": "ffdhe3072" 00:10:03.265 } 00:10:03.265 } 00:10:03.265 ]' 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.265 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.524 21:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:04.091 21:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.091 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.350 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.608 00:10:04.608 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:04.608 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
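Each cycle also exercises the Linux kernel initiator: after the SPDK host detaches, nvme-cli connects to the same subsystem passing the secrets in their DHHC-1:xx: wire format, disconnects, and the host entry is removed so the next combination starts clean. A condensed sketch of that leg, mirroring the flags used in the trace and reusing key1's secrets verbatim from the commands above (same NQN/host-ID assumptions as the previous sketch):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158

# kernel initiator: authenticate with the host secret and, when configured, the controller secret
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
    --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: \
    --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==:
# drop the kernel connection and de-register the host on the target
nvme disconnect -n $SUBNQN
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN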
00:10:04.608 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:04.867 { 00:10:04.867 "cntlid": 19, 00:10:04.867 "qid": 0, 00:10:04.867 "state": "enabled", 00:10:04.867 "thread": "nvmf_tgt_poll_group_000", 00:10:04.867 "listen_address": { 00:10:04.867 "trtype": "TCP", 00:10:04.867 "adrfam": "IPv4", 00:10:04.867 "traddr": "10.0.0.2", 00:10:04.867 "trsvcid": "4420" 00:10:04.867 }, 00:10:04.867 "peer_address": { 00:10:04.867 "trtype": "TCP", 00:10:04.867 "adrfam": "IPv4", 00:10:04.867 "traddr": "10.0.0.1", 00:10:04.867 "trsvcid": "53722" 00:10:04.867 }, 00:10:04.867 "auth": { 00:10:04.867 "state": "completed", 00:10:04.867 "digest": "sha256", 00:10:04.867 "dhgroup": "ffdhe3072" 00:10:04.867 } 00:10:04.867 } 00:10:04.867 ]' 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.867 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:05.126 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:05.126 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:05.126 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.126 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.126 21:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.384 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:05.951 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.951 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:05.951 21:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.951 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.951 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:05.951 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:05.952 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:06.210 21:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:06.468 00:10:06.468 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:06.468 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:06.468 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.726 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.726 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:06.726 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.726 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.726 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:06.727 { 00:10:06.727 "cntlid": 21, 00:10:06.727 "qid": 0, 00:10:06.727 "state": "enabled", 00:10:06.727 "thread": "nvmf_tgt_poll_group_000", 00:10:06.727 "listen_address": { 00:10:06.727 "trtype": "TCP", 00:10:06.727 "adrfam": "IPv4", 00:10:06.727 "traddr": "10.0.0.2", 00:10:06.727 "trsvcid": "4420" 00:10:06.727 }, 00:10:06.727 "peer_address": { 00:10:06.727 "trtype": "TCP", 00:10:06.727 "adrfam": "IPv4", 00:10:06.727 "traddr": "10.0.0.1", 00:10:06.727 "trsvcid": "59888" 00:10:06.727 }, 00:10:06.727 "auth": { 00:10:06.727 "state": "completed", 00:10:06.727 "digest": "sha256", 00:10:06.727 "dhgroup": "ffdhe3072" 00:10:06.727 } 00:10:06.727 } 00:10:06.727 ]' 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.727 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.985 21:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.551 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:07.809 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:08.067 00:10:08.067 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:08.067 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:08.067 21:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:08.325 { 00:10:08.325 "cntlid": 
23, 00:10:08.325 "qid": 0, 00:10:08.325 "state": "enabled", 00:10:08.325 "thread": "nvmf_tgt_poll_group_000", 00:10:08.325 "listen_address": { 00:10:08.325 "trtype": "TCP", 00:10:08.325 "adrfam": "IPv4", 00:10:08.325 "traddr": "10.0.0.2", 00:10:08.325 "trsvcid": "4420" 00:10:08.325 }, 00:10:08.325 "peer_address": { 00:10:08.325 "trtype": "TCP", 00:10:08.325 "adrfam": "IPv4", 00:10:08.325 "traddr": "10.0.0.1", 00:10:08.325 "trsvcid": "59924" 00:10:08.325 }, 00:10:08.325 "auth": { 00:10:08.325 "state": "completed", 00:10:08.325 "digest": "sha256", 00:10:08.325 "dhgroup": "ffdhe3072" 00:10:08.325 } 00:10:08.325 } 00:10:08.325 ]' 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.325 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:08.583 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:08.583 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:08.583 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.583 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.583 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.841 21:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:09.407 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:09.665 21:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.665 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.923 00:10:09.923 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:09.923 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.923 21:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.181 { 00:10:10.181 "cntlid": 25, 00:10:10.181 "qid": 0, 00:10:10.181 "state": "enabled", 00:10:10.181 "thread": "nvmf_tgt_poll_group_000", 00:10:10.181 "listen_address": { 00:10:10.181 "trtype": "TCP", 00:10:10.181 "adrfam": "IPv4", 00:10:10.181 "traddr": "10.0.0.2", 00:10:10.181 "trsvcid": "4420" 00:10:10.181 }, 00:10:10.181 "peer_address": { 00:10:10.181 "trtype": "TCP", 00:10:10.181 
"adrfam": "IPv4", 00:10:10.181 "traddr": "10.0.0.1", 00:10:10.181 "trsvcid": "59944" 00:10:10.181 }, 00:10:10.181 "auth": { 00:10:10.181 "state": "completed", 00:10:10.181 "digest": "sha256", 00:10:10.181 "dhgroup": "ffdhe4096" 00:10:10.181 } 00:10:10.181 } 00:10:10.181 ]' 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:10.181 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.440 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.440 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.440 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.440 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:11.006 21:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:11.266 21:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.266 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.831 00:10:11.831 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:11.831 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:11.831 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.089 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.089 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.089 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.089 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.089 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.089 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.089 { 00:10:12.089 "cntlid": 27, 00:10:12.089 "qid": 0, 00:10:12.089 "state": "enabled", 00:10:12.089 "thread": "nvmf_tgt_poll_group_000", 00:10:12.089 "listen_address": { 00:10:12.089 "trtype": "TCP", 00:10:12.089 "adrfam": "IPv4", 00:10:12.089 "traddr": "10.0.0.2", 00:10:12.089 "trsvcid": "4420" 00:10:12.089 }, 00:10:12.089 "peer_address": { 00:10:12.089 "trtype": "TCP", 00:10:12.089 "adrfam": "IPv4", 00:10:12.089 "traddr": "10.0.0.1", 00:10:12.089 "trsvcid": "59972" 00:10:12.089 }, 00:10:12.090 "auth": { 00:10:12.090 "state": "completed", 00:10:12.090 "digest": "sha256", 00:10:12.090 "dhgroup": "ffdhe4096" 00:10:12.090 } 00:10:12.090 } 00:10:12.090 ]' 00:10:12.090 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:12.090 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.090 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.090 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:12.090 21:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.090 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.090 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.090 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.348 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:12.914 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.173 21:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.431 00:10:13.431 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:13.431 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:13.431 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.689 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.689 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.689 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.689 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.689 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.689 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:13.689 { 00:10:13.689 "cntlid": 29, 00:10:13.689 "qid": 0, 00:10:13.689 "state": "enabled", 00:10:13.689 "thread": "nvmf_tgt_poll_group_000", 00:10:13.689 "listen_address": { 00:10:13.689 "trtype": "TCP", 00:10:13.689 "adrfam": "IPv4", 00:10:13.689 "traddr": "10.0.0.2", 00:10:13.689 "trsvcid": "4420" 00:10:13.689 }, 00:10:13.689 "peer_address": { 00:10:13.689 "trtype": "TCP", 00:10:13.689 "adrfam": "IPv4", 00:10:13.689 "traddr": "10.0.0.1", 00:10:13.689 "trsvcid": "60004" 00:10:13.689 }, 00:10:13.689 "auth": { 00:10:13.689 "state": "completed", 00:10:13.689 "digest": "sha256", 00:10:13.689 "dhgroup": "ffdhe4096" 00:10:13.689 } 00:10:13.689 } 00:10:13.690 ]' 00:10:13.690 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:13.690 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.690 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:13.948 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:13.948 21:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:13.948 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.948 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.948 21:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.206 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:14.773 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.032 21:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:15.032 21:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:15.290 00:10:15.290 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:15.290 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:15.290 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:15.548 { 00:10:15.548 "cntlid": 31, 00:10:15.548 "qid": 0, 00:10:15.548 "state": "enabled", 00:10:15.548 "thread": "nvmf_tgt_poll_group_000", 00:10:15.548 "listen_address": { 00:10:15.548 "trtype": "TCP", 00:10:15.548 "adrfam": "IPv4", 00:10:15.548 "traddr": "10.0.0.2", 00:10:15.548 "trsvcid": "4420" 00:10:15.548 }, 00:10:15.548 "peer_address": { 00:10:15.548 "trtype": "TCP", 00:10:15.548 "adrfam": "IPv4", 00:10:15.548 "traddr": "10.0.0.1", 00:10:15.548 "trsvcid": "38528" 00:10:15.548 }, 00:10:15.548 "auth": { 00:10:15.548 "state": "completed", 00:10:15.548 "digest": "sha256", 00:10:15.548 "dhgroup": "ffdhe4096" 00:10:15.548 } 00:10:15.548 } 00:10:15.548 ]' 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.548 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.807 21:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:16.372 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:16.630 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.887 00:10:16.887 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:16.887 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:16.887 21:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:17.146 { 00:10:17.146 "cntlid": 33, 00:10:17.146 "qid": 0, 00:10:17.146 "state": "enabled", 00:10:17.146 "thread": "nvmf_tgt_poll_group_000", 00:10:17.146 "listen_address": { 00:10:17.146 "trtype": "TCP", 00:10:17.146 "adrfam": "IPv4", 00:10:17.146 "traddr": "10.0.0.2", 00:10:17.146 "trsvcid": "4420" 00:10:17.146 }, 00:10:17.146 "peer_address": { 00:10:17.146 "trtype": "TCP", 00:10:17.146 "adrfam": "IPv4", 00:10:17.146 "traddr": "10.0.0.1", 00:10:17.146 "trsvcid": "38556" 00:10:17.146 }, 00:10:17.146 "auth": { 00:10:17.146 "state": "completed", 00:10:17.146 "digest": "sha256", 00:10:17.146 "dhgroup": "ffdhe6144" 00:10:17.146 } 00:10:17.146 } 00:10:17.146 ]' 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.146 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:17.405 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:17.405 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:17.405 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.405 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.405 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.663 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 
987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:18.228 21:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:18.228 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.487 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.745 00:10:18.745 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.745 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.745 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:19.002 { 00:10:19.002 "cntlid": 35, 00:10:19.002 "qid": 0, 00:10:19.002 "state": "enabled", 00:10:19.002 "thread": "nvmf_tgt_poll_group_000", 00:10:19.002 "listen_address": { 00:10:19.002 "trtype": "TCP", 00:10:19.002 "adrfam": "IPv4", 00:10:19.002 "traddr": "10.0.0.2", 00:10:19.002 "trsvcid": "4420" 00:10:19.002 }, 00:10:19.002 "peer_address": { 00:10:19.002 "trtype": "TCP", 00:10:19.002 "adrfam": "IPv4", 00:10:19.002 "traddr": "10.0.0.1", 00:10:19.002 "trsvcid": "38594" 00:10:19.002 }, 00:10:19.002 "auth": { 00:10:19.002 "state": "completed", 00:10:19.002 "digest": "sha256", 00:10:19.002 "dhgroup": "ffdhe6144" 00:10:19.002 } 00:10:19.002 } 00:10:19.002 ]' 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.002 21:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:19.260 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:19.260 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:19.260 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.260 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.260 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.518 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.084 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:20.084 21:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.084 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.650 00:10:20.651 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:20.651 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.651 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:20.908 { 00:10:20.908 "cntlid": 37, 00:10:20.908 "qid": 0, 00:10:20.908 "state": "enabled", 00:10:20.908 "thread": "nvmf_tgt_poll_group_000", 00:10:20.908 "listen_address": { 00:10:20.908 "trtype": "TCP", 00:10:20.908 "adrfam": "IPv4", 00:10:20.908 "traddr": "10.0.0.2", 00:10:20.908 "trsvcid": "4420" 00:10:20.908 }, 00:10:20.908 "peer_address": { 00:10:20.908 "trtype": "TCP", 00:10:20.908 "adrfam": "IPv4", 00:10:20.908 "traddr": "10.0.0.1", 00:10:20.908 "trsvcid": "38612" 00:10:20.908 }, 00:10:20.908 "auth": { 00:10:20.908 "state": "completed", 00:10:20.908 "digest": "sha256", 00:10:20.908 "dhgroup": "ffdhe6144" 00:10:20.908 } 00:10:20.908 } 00:10:20.908 ]' 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:20.908 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.909 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.909 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:20.909 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.909 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.909 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.909 21:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.167 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
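For readability, the per-iteration pattern that the xtrace above keeps repeating (connect_authenticate in target/auth.sh, here for sha256 with the ffdhe* dhgroups) condenses to roughly the following shell sketch. hostrpc and rpc_cmd are the wrappers visible in the trace (scripts/rpc.py against /var/tmp/host.sock and the target-side RPC socket respectively); $dhgroup and $keyid follow the loop variables shown in the trace, while $hostnqn, $hostid and $key are placeholders standing in for the concrete NQN, UUID and DHHC-1 values logged above.

  # one (digest, dhgroup, key) iteration, condensed from the trace above
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid" \
          ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
          | jq -r '.[0].auth'                                 # expect state=completed with the configured digest/dhgroup
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
          -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$key"   # kernel-initiator path
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"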
00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.734 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:21.993 21:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:22.251 00:10:22.251 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.251 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.251 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.509 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.509 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.509 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.509 21:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.509 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.509 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.509 { 00:10:22.509 "cntlid": 39, 00:10:22.509 "qid": 0, 00:10:22.509 "state": "enabled", 00:10:22.509 "thread": "nvmf_tgt_poll_group_000", 00:10:22.509 "listen_address": { 00:10:22.509 "trtype": "TCP", 00:10:22.509 "adrfam": "IPv4", 00:10:22.509 "traddr": "10.0.0.2", 00:10:22.509 "trsvcid": "4420" 00:10:22.509 }, 00:10:22.509 "peer_address": { 00:10:22.509 "trtype": "TCP", 00:10:22.509 "adrfam": "IPv4", 00:10:22.509 "traddr": "10.0.0.1", 00:10:22.509 "trsvcid": "38630" 00:10:22.509 }, 00:10:22.509 "auth": { 00:10:22.509 "state": "completed", 00:10:22.509 "digest": "sha256", 00:10:22.509 "dhgroup": "ffdhe6144" 00:10:22.509 } 00:10:22.509 } 00:10:22.509 ]' 00:10:22.509 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.771 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.028 21:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:23.594 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.853 21:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.421 00:10:24.421 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.421 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.421 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.679 { 00:10:24.679 "cntlid": 41, 00:10:24.679 "qid": 0, 
00:10:24.679 "state": "enabled", 00:10:24.679 "thread": "nvmf_tgt_poll_group_000", 00:10:24.679 "listen_address": { 00:10:24.679 "trtype": "TCP", 00:10:24.679 "adrfam": "IPv4", 00:10:24.679 "traddr": "10.0.0.2", 00:10:24.679 "trsvcid": "4420" 00:10:24.679 }, 00:10:24.679 "peer_address": { 00:10:24.679 "trtype": "TCP", 00:10:24.679 "adrfam": "IPv4", 00:10:24.679 "traddr": "10.0.0.1", 00:10:24.679 "trsvcid": "38670" 00:10:24.679 }, 00:10:24.679 "auth": { 00:10:24.679 "state": "completed", 00:10:24.679 "digest": "sha256", 00:10:24.679 "dhgroup": "ffdhe8192" 00:10:24.679 } 00:10:24.679 } 00:10:24.679 ]' 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.679 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.938 21:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:25.503 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.503 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:25.503 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.504 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.504 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.504 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.504 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:25.504 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.761 21:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.328 00:10:26.328 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.328 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.328 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:26.587 { 00:10:26.587 "cntlid": 43, 00:10:26.587 "qid": 0, 00:10:26.587 "state": "enabled", 00:10:26.587 "thread": "nvmf_tgt_poll_group_000", 00:10:26.587 "listen_address": { 00:10:26.587 "trtype": "TCP", 00:10:26.587 "adrfam": "IPv4", 00:10:26.587 "traddr": "10.0.0.2", 00:10:26.587 "trsvcid": "4420" 00:10:26.587 }, 00:10:26.587 "peer_address": { 00:10:26.587 "trtype": "TCP", 00:10:26.587 "adrfam": "IPv4", 00:10:26.587 "traddr": "10.0.0.1", 
00:10:26.587 "trsvcid": "36208" 00:10:26.587 }, 00:10:26.587 "auth": { 00:10:26.587 "state": "completed", 00:10:26.587 "digest": "sha256", 00:10:26.587 "dhgroup": "ffdhe8192" 00:10:26.587 } 00:10:26.587 } 00:10:26.587 ]' 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.587 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.845 21:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:27.410 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:27.669 21:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.669 21:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.267 00:10:28.267 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:28.267 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.267 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.581 { 00:10:28.581 "cntlid": 45, 00:10:28.581 "qid": 0, 00:10:28.581 "state": "enabled", 00:10:28.581 "thread": "nvmf_tgt_poll_group_000", 00:10:28.581 "listen_address": { 00:10:28.581 "trtype": "TCP", 00:10:28.581 "adrfam": "IPv4", 00:10:28.581 "traddr": "10.0.0.2", 00:10:28.581 "trsvcid": "4420" 00:10:28.581 }, 00:10:28.581 "peer_address": { 00:10:28.581 "trtype": "TCP", 00:10:28.581 "adrfam": "IPv4", 00:10:28.581 "traddr": "10.0.0.1", 00:10:28.581 "trsvcid": "36246" 00:10:28.581 }, 00:10:28.581 "auth": { 00:10:28.581 "state": "completed", 00:10:28.581 "digest": "sha256", 00:10:28.581 "dhgroup": "ffdhe8192" 00:10:28.581 } 00:10:28.581 } 00:10:28.581 ]' 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.581 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.840 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:28.840 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.840 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.840 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.840 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.840 21:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 
--dhchap-key key3 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:29.775 21:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:30.341 00:10:30.341 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.341 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.341 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.600 { 00:10:30.600 "cntlid": 47, 00:10:30.600 "qid": 0, 00:10:30.600 "state": "enabled", 00:10:30.600 "thread": "nvmf_tgt_poll_group_000", 00:10:30.600 "listen_address": { 00:10:30.600 "trtype": "TCP", 00:10:30.600 "adrfam": "IPv4", 00:10:30.600 "traddr": "10.0.0.2", 00:10:30.600 "trsvcid": "4420" 00:10:30.600 }, 00:10:30.600 "peer_address": { 00:10:30.600 "trtype": "TCP", 00:10:30.600 "adrfam": "IPv4", 00:10:30.600 "traddr": "10.0.0.1", 00:10:30.600 "trsvcid": "36258" 00:10:30.600 }, 00:10:30.600 "auth": { 00:10:30.600 "state": "completed", 00:10:30.600 "digest": "sha256", 00:10:30.600 "dhgroup": "ffdhe8192" 00:10:30.600 } 00:10:30.600 } 00:10:30.600 ]' 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.600 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.858 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:30.858 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.858 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:10:30.858 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.858 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.115 21:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:31.681 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.939 21:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.197 00:10:32.197 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.197 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.197 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.456 { 00:10:32.456 "cntlid": 49, 00:10:32.456 "qid": 0, 00:10:32.456 "state": "enabled", 00:10:32.456 "thread": "nvmf_tgt_poll_group_000", 00:10:32.456 "listen_address": { 00:10:32.456 "trtype": "TCP", 00:10:32.456 "adrfam": "IPv4", 00:10:32.456 "traddr": "10.0.0.2", 00:10:32.456 "trsvcid": "4420" 00:10:32.456 }, 00:10:32.456 "peer_address": { 00:10:32.456 "trtype": "TCP", 00:10:32.456 "adrfam": "IPv4", 00:10:32.456 "traddr": "10.0.0.1", 00:10:32.456 "trsvcid": "36298" 00:10:32.456 }, 00:10:32.456 "auth": { 00:10:32.456 "state": "completed", 00:10:32.456 "digest": "sha384", 00:10:32.456 "dhgroup": "null" 00:10:32.456 } 00:10:32.456 } 00:10:32.456 ]' 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.456 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.714 21:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:33.281 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.540 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.799 00:10:33.799 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.799 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.799 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.058 { 00:10:34.058 "cntlid": 51, 00:10:34.058 "qid": 0, 00:10:34.058 "state": "enabled", 00:10:34.058 "thread": "nvmf_tgt_poll_group_000", 00:10:34.058 "listen_address": { 00:10:34.058 "trtype": "TCP", 00:10:34.058 "adrfam": "IPv4", 00:10:34.058 "traddr": "10.0.0.2", 00:10:34.058 "trsvcid": "4420" 00:10:34.058 }, 00:10:34.058 "peer_address": { 00:10:34.058 "trtype": "TCP", 00:10:34.058 "adrfam": "IPv4", 00:10:34.058 "traddr": "10.0.0.1", 00:10:34.058 "trsvcid": "36316" 00:10:34.058 }, 00:10:34.058 "auth": { 00:10:34.058 "state": "completed", 00:10:34.058 "digest": "sha384", 00:10:34.058 "dhgroup": "null" 00:10:34.058 } 00:10:34.058 } 00:10:34.058 ]' 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.058 21:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.317 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret 
DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.883 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:34.884 21:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.141 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.400 00:10:35.400 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.400 21:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.400 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.658 { 00:10:35.658 "cntlid": 53, 00:10:35.658 "qid": 0, 00:10:35.658 "state": "enabled", 00:10:35.658 "thread": "nvmf_tgt_poll_group_000", 00:10:35.658 "listen_address": { 00:10:35.658 "trtype": "TCP", 00:10:35.658 "adrfam": "IPv4", 00:10:35.658 "traddr": "10.0.0.2", 00:10:35.658 "trsvcid": "4420" 00:10:35.658 }, 00:10:35.658 "peer_address": { 00:10:35.658 "trtype": "TCP", 00:10:35.658 "adrfam": "IPv4", 00:10:35.658 "traddr": "10.0.0.1", 00:10:35.658 "trsvcid": "38896" 00:10:35.658 }, 00:10:35.658 "auth": { 00:10:35.658 "state": "completed", 00:10:35.658 "digest": "sha384", 00:10:35.658 "dhgroup": "null" 00:10:35.658 } 00:10:35.658 } 00:10:35.658 ]' 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:35.658 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.916 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:35.916 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.916 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.916 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.916 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.916 21:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:36.852 21:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.111 00:10:37.111 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.111 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.111 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.370 { 00:10:37.370 "cntlid": 55, 00:10:37.370 "qid": 0, 00:10:37.370 "state": "enabled", 00:10:37.370 "thread": "nvmf_tgt_poll_group_000", 00:10:37.370 "listen_address": { 00:10:37.370 "trtype": "TCP", 00:10:37.370 "adrfam": "IPv4", 00:10:37.370 "traddr": "10.0.0.2", 00:10:37.370 "trsvcid": "4420" 00:10:37.370 }, 00:10:37.370 "peer_address": { 00:10:37.370 "trtype": "TCP", 00:10:37.370 "adrfam": "IPv4", 00:10:37.370 "traddr": "10.0.0.1", 00:10:37.370 "trsvcid": "38910" 00:10:37.370 }, 00:10:37.370 "auth": { 00:10:37.370 "state": "completed", 00:10:37.370 "digest": "sha384", 00:10:37.370 "dhgroup": "null" 00:10:37.370 } 00:10:37.370 } 00:10:37.370 ]' 00:10:37.370 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.629 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.888 21:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:38.456 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.715 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.974 00:10:38.974 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.974 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.974 21:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.233 21:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.233 { 00:10:39.233 "cntlid": 57, 00:10:39.233 "qid": 0, 00:10:39.233 "state": "enabled", 00:10:39.233 "thread": "nvmf_tgt_poll_group_000", 00:10:39.233 "listen_address": { 00:10:39.233 "trtype": "TCP", 00:10:39.233 "adrfam": "IPv4", 00:10:39.233 "traddr": "10.0.0.2", 00:10:39.233 "trsvcid": "4420" 00:10:39.233 }, 00:10:39.233 "peer_address": { 00:10:39.233 "trtype": "TCP", 00:10:39.233 "adrfam": "IPv4", 00:10:39.233 "traddr": "10.0.0.1", 00:10:39.233 "trsvcid": "38942" 00:10:39.233 }, 00:10:39.233 "auth": { 00:10:39.233 "state": "completed", 00:10:39.233 "digest": "sha384", 00:10:39.233 "dhgroup": "ffdhe2048" 00:10:39.233 } 00:10:39.233 } 00:10:39.233 ]' 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.233 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.492 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:39.492 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.492 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.492 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.492 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.751 21:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:40.317 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:40.318 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.576 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.835 00:10:40.835 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.835 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.835 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.094 { 00:10:41.094 "cntlid": 59, 00:10:41.094 "qid": 0, 00:10:41.094 "state": "enabled", 00:10:41.094 "thread": "nvmf_tgt_poll_group_000", 00:10:41.094 "listen_address": { 00:10:41.094 "trtype": "TCP", 00:10:41.094 "adrfam": "IPv4", 00:10:41.094 "traddr": "10.0.0.2", 00:10:41.094 "trsvcid": "4420" 
00:10:41.094 }, 00:10:41.094 "peer_address": { 00:10:41.094 "trtype": "TCP", 00:10:41.094 "adrfam": "IPv4", 00:10:41.094 "traddr": "10.0.0.1", 00:10:41.094 "trsvcid": "38960" 00:10:41.094 }, 00:10:41.094 "auth": { 00:10:41.094 "state": "completed", 00:10:41.094 "digest": "sha384", 00:10:41.094 "dhgroup": "ffdhe2048" 00:10:41.094 } 00:10:41.094 } 00:10:41.094 ]' 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:41.094 21:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.094 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.094 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.094 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.353 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:41.920 21:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
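[Reading aid, not test output] The trace above and below repeats one fixed cycle from target/auth.sh for every digest/dhgroup/key combination. A minimal shell sketch of that cycle is condensed here, using the sha384/ffdhe2048/key2 case being set up at this point in the log. It is a paraphrase of the commands already shown, not additional output: it assumes the DH-HMAC-CHAP keys (key0-key3 and the controller keys ckey0-ckey2) were registered with the target earlier in the run, outside this excerpt, and that the bare rpc_cmd calls in the trace go to the target's default RPC socket while hostrpc goes to /var/tmp/host.sock. The long DHHC-1 secrets passed to nvme connect are abbreviated as placeholders; their full values appear in the surrounding log.

  # Host side: restrict the initiator to one digest and one DH group
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host NQN on the subsystem with the key pair under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attaching a controller triggers the in-band DH-HMAC-CHAP exchange
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller exists and the target recorded the expected auth parameters
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
      # expect: sha384 / ffdhe2048 / completed

  # Tear down the bdev-level controller, then repeat the handshake with the kernel initiator
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
      --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
      --dhchap-secret 'DHHC-1:02:<key2 secret from the log>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<ckey2 secret from the log>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Drop the host entry so the next digest/dhgroup/key combination starts clean
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158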
00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.179 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.747 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.747 { 00:10:42.747 "cntlid": 61, 00:10:42.747 "qid": 0, 00:10:42.747 "state": "enabled", 00:10:42.747 "thread": "nvmf_tgt_poll_group_000", 00:10:42.747 "listen_address": { 00:10:42.747 "trtype": "TCP", 00:10:42.747 "adrfam": "IPv4", 00:10:42.747 "traddr": "10.0.0.2", 00:10:42.747 "trsvcid": "4420" 00:10:42.747 }, 00:10:42.747 "peer_address": { 00:10:42.747 "trtype": "TCP", 00:10:42.747 "adrfam": "IPv4", 00:10:42.747 "traddr": "10.0.0.1", 00:10:42.747 "trsvcid": "38998" 00:10:42.747 }, 00:10:42.747 "auth": { 00:10:42.747 "state": "completed", 00:10:42.747 "digest": "sha384", 00:10:42.747 "dhgroup": "ffdhe2048" 00:10:42.747 } 00:10:42.747 } 00:10:42.747 ]' 00:10:42.747 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.006 21:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.265 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:43.832 21:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.090 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.348 00:10:44.348 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.348 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.348 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.607 { 00:10:44.607 "cntlid": 63, 00:10:44.607 "qid": 0, 00:10:44.607 "state": "enabled", 00:10:44.607 "thread": "nvmf_tgt_poll_group_000", 00:10:44.607 "listen_address": { 00:10:44.607 "trtype": "TCP", 00:10:44.607 "adrfam": "IPv4", 00:10:44.607 "traddr": "10.0.0.2", 00:10:44.607 "trsvcid": "4420" 00:10:44.607 }, 00:10:44.607 "peer_address": { 00:10:44.607 "trtype": "TCP", 00:10:44.607 "adrfam": "IPv4", 00:10:44.607 "traddr": "10.0.0.1", 00:10:44.607 "trsvcid": "39006" 00:10:44.607 }, 00:10:44.607 "auth": { 00:10:44.607 "state": "completed", 00:10:44.607 "digest": "sha384", 00:10:44.607 "dhgroup": "ffdhe2048" 00:10:44.607 } 00:10:44.607 } 00:10:44.607 ]' 00:10:44.607 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.867 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.126 21:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:45.693 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.953 21:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.953 21:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.211 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.211 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.470 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.470 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.470 { 00:10:46.470 "cntlid": 65, 00:10:46.470 "qid": 0, 00:10:46.470 "state": "enabled", 00:10:46.470 "thread": "nvmf_tgt_poll_group_000", 00:10:46.470 "listen_address": { 00:10:46.470 "trtype": "TCP", 00:10:46.470 "adrfam": "IPv4", 00:10:46.470 "traddr": "10.0.0.2", 00:10:46.470 "trsvcid": "4420" 00:10:46.470 }, 00:10:46.470 "peer_address": { 00:10:46.470 "trtype": "TCP", 00:10:46.470 "adrfam": "IPv4", 00:10:46.470 "traddr": "10.0.0.1", 00:10:46.470 "trsvcid": "33578" 00:10:46.470 }, 00:10:46.470 "auth": { 00:10:46.470 "state": "completed", 00:10:46.470 "digest": "sha384", 00:10:46.470 "dhgroup": "ffdhe3072" 00:10:46.470 } 00:10:46.470 } 00:10:46.470 ]' 00:10:46.470 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.470 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.470 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.470 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:46.471 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.471 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.471 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.471 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.730 21:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:47.297 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:10:47.556 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.814 00:10:47.814 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.814 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.814 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.073 { 00:10:48.073 "cntlid": 67, 00:10:48.073 "qid": 0, 00:10:48.073 "state": "enabled", 00:10:48.073 "thread": "nvmf_tgt_poll_group_000", 00:10:48.073 "listen_address": { 00:10:48.073 "trtype": "TCP", 00:10:48.073 "adrfam": "IPv4", 00:10:48.073 "traddr": "10.0.0.2", 00:10:48.073 "trsvcid": "4420" 00:10:48.073 }, 00:10:48.073 "peer_address": { 00:10:48.073 "trtype": "TCP", 00:10:48.073 "adrfam": "IPv4", 00:10:48.073 "traddr": "10.0.0.1", 00:10:48.073 "trsvcid": "33608" 00:10:48.073 }, 00:10:48.073 "auth": { 00:10:48.073 "state": "completed", 00:10:48.073 "digest": "sha384", 00:10:48.073 "dhgroup": "ffdhe3072" 00:10:48.073 } 00:10:48.073 } 00:10:48.073 ]' 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.073 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:48.074 21:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.074 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.074 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.074 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.333 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 
987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:48.899 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.158 21:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
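The same host/target RPC cycle repeats for every key index and dhgroup; a condensed sketch of one iteration as it runs in this part of the trace (sha384 with ffdhe3072 and the key2/ckey2 pair), with every command and flag taken from the logged rpc.py calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158

    # Host side: restrict the initiator to the digest/dhgroup combination under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target side: allow this host on the subsystem with the key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attaching a controller forces the DH-HMAC-CHAP handshake to run.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2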
00:10:49.416 00:10:49.416 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.416 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.416 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.675 { 00:10:49.675 "cntlid": 69, 00:10:49.675 "qid": 0, 00:10:49.675 "state": "enabled", 00:10:49.675 "thread": "nvmf_tgt_poll_group_000", 00:10:49.675 "listen_address": { 00:10:49.675 "trtype": "TCP", 00:10:49.675 "adrfam": "IPv4", 00:10:49.675 "traddr": "10.0.0.2", 00:10:49.675 "trsvcid": "4420" 00:10:49.675 }, 00:10:49.675 "peer_address": { 00:10:49.675 "trtype": "TCP", 00:10:49.675 "adrfam": "IPv4", 00:10:49.675 "traddr": "10.0.0.1", 00:10:49.675 "trsvcid": "33644" 00:10:49.675 }, 00:10:49.675 "auth": { 00:10:49.675 "state": "completed", 00:10:49.675 "digest": "sha384", 00:10:49.675 "dhgroup": "ffdhe3072" 00:10:49.675 } 00:10:49.675 } 00:10:49.675 ]' 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.675 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.935 21:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
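Each cycle also exercises the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets on the command line before tearing the session back down. A sketch of that step using the flags seen in the log; the two secret variables are placeholders standing in for the DHHC-1 strings printed above:

    uuid=987211d5-ddc7-4d0a-8ba2-cf43288d1158
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
    host_secret=...   # DHHC-1 host key for the index under test (placeholder)
    ctrl_secret=...   # matching DHHC-1 controller key, omitted for key3 (placeholder)

    # Kernel initiator: connect with one I/O queue, authenticating in both directions,
    # then disconnect so the next digest/dhgroup/key combination can be tested.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$uuid" \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n "$subnqn"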
00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:50.504 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.797 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.068 00:10:51.068 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.068 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.068 21:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.326 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.326 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.326 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.326 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.326 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.327 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.327 { 00:10:51.327 "cntlid": 71, 00:10:51.327 "qid": 0, 00:10:51.327 "state": "enabled", 00:10:51.327 "thread": "nvmf_tgt_poll_group_000", 00:10:51.327 "listen_address": { 00:10:51.327 "trtype": "TCP", 00:10:51.327 "adrfam": "IPv4", 00:10:51.327 "traddr": "10.0.0.2", 00:10:51.327 "trsvcid": "4420" 00:10:51.327 }, 00:10:51.327 "peer_address": { 00:10:51.327 "trtype": "TCP", 00:10:51.327 "adrfam": "IPv4", 00:10:51.327 "traddr": "10.0.0.1", 00:10:51.327 "trsvcid": "33670" 00:10:51.327 }, 00:10:51.327 "auth": { 00:10:51.327 "state": "completed", 00:10:51.327 "digest": "sha384", 00:10:51.327 "dhgroup": "ffdhe3072" 00:10:51.327 } 00:10:51.327 } 00:10:51.327 ]' 00:10:51.327 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.327 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.327 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.327 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.327 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.585 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.585 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.585 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.585 21:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:52.153 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.412 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.671 00:10:52.671 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.671 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.671 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.930 21:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.930 { 00:10:52.930 "cntlid": 73, 00:10:52.930 "qid": 0, 00:10:52.930 "state": "enabled", 00:10:52.930 "thread": "nvmf_tgt_poll_group_000", 00:10:52.930 "listen_address": { 00:10:52.930 "trtype": "TCP", 00:10:52.930 "adrfam": "IPv4", 00:10:52.930 "traddr": "10.0.0.2", 00:10:52.930 "trsvcid": "4420" 00:10:52.930 }, 00:10:52.930 "peer_address": { 00:10:52.930 "trtype": "TCP", 00:10:52.930 "adrfam": "IPv4", 00:10:52.930 "traddr": "10.0.0.1", 00:10:52.930 "trsvcid": "33702" 00:10:52.930 }, 00:10:52.930 "auth": { 00:10:52.930 "state": "completed", 00:10:52.930 "digest": "sha384", 00:10:52.930 "dhgroup": "ffdhe4096" 00:10:52.930 } 00:10:52.930 } 00:10:52.930 ]' 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.930 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.189 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:53.189 21:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.189 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.189 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.189 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.448 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.016 21:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.583 00:10:54.583 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.583 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.583 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.583 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.583 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.583 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.584 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.584 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.584 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.584 { 00:10:54.584 "cntlid": 75, 00:10:54.584 "qid": 0, 00:10:54.584 
"state": "enabled", 00:10:54.584 "thread": "nvmf_tgt_poll_group_000", 00:10:54.584 "listen_address": { 00:10:54.584 "trtype": "TCP", 00:10:54.584 "adrfam": "IPv4", 00:10:54.584 "traddr": "10.0.0.2", 00:10:54.584 "trsvcid": "4420" 00:10:54.584 }, 00:10:54.584 "peer_address": { 00:10:54.584 "trtype": "TCP", 00:10:54.584 "adrfam": "IPv4", 00:10:54.584 "traddr": "10.0.0.1", 00:10:54.584 "trsvcid": "33734" 00:10:54.584 }, 00:10:54.584 "auth": { 00:10:54.584 "state": "completed", 00:10:54.584 "digest": "sha384", 00:10:54.584 "dhgroup": "ffdhe4096" 00:10:54.584 } 00:10:54.584 } 00:10:54.584 ]' 00:10:54.584 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.584 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.584 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.843 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:54.843 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.843 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.843 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.843 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.843 21:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:55.411 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.670 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.929 00:10:56.187 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.187 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.187 21:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.187 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.187 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.187 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.188 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.188 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.188 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.188 { 00:10:56.188 "cntlid": 77, 00:10:56.188 "qid": 0, 00:10:56.188 "state": "enabled", 00:10:56.188 "thread": "nvmf_tgt_poll_group_000", 00:10:56.188 "listen_address": { 00:10:56.188 "trtype": "TCP", 00:10:56.188 "adrfam": "IPv4", 00:10:56.188 "traddr": "10.0.0.2", 00:10:56.188 "trsvcid": "4420" 00:10:56.188 }, 00:10:56.188 "peer_address": { 00:10:56.188 "trtype": "TCP", 00:10:56.188 "adrfam": "IPv4", 00:10:56.188 "traddr": "10.0.0.1", 00:10:56.188 "trsvcid": "39832" 00:10:56.188 }, 00:10:56.188 
"auth": { 00:10:56.188 "state": "completed", 00:10:56.188 "digest": "sha384", 00:10:56.188 "dhgroup": "ffdhe4096" 00:10:56.188 } 00:10:56.188 } 00:10:56.188 ]' 00:10:56.188 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.188 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.188 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.446 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.446 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.446 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.446 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.446 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.706 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:10:57.273 21:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:57.273 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:57.841 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.841 { 00:10:57.841 "cntlid": 79, 00:10:57.841 "qid": 0, 00:10:57.841 "state": "enabled", 00:10:57.841 "thread": "nvmf_tgt_poll_group_000", 00:10:57.841 "listen_address": { 00:10:57.841 "trtype": "TCP", 00:10:57.841 "adrfam": "IPv4", 00:10:57.841 "traddr": "10.0.0.2", 00:10:57.841 "trsvcid": "4420" 00:10:57.841 }, 00:10:57.841 "peer_address": { 00:10:57.841 "trtype": "TCP", 00:10:57.841 "adrfam": "IPv4", 00:10:57.841 "traddr": "10.0.0.1", 00:10:57.841 "trsvcid": "39852" 00:10:57.841 }, 00:10:57.841 "auth": { 00:10:57.841 "state": "completed", 00:10:57.841 "digest": "sha384", 00:10:57.841 "dhgroup": "ffdhe4096" 00:10:57.841 } 00:10:57.841 } 00:10:57.841 ]' 00:10:57.841 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.100 21:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.359 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:58.926 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.185 21:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.185 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.186 21:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.443 00:10:59.443 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.443 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.443 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.701 { 00:10:59.701 "cntlid": 81, 00:10:59.701 "qid": 0, 00:10:59.701 "state": "enabled", 00:10:59.701 "thread": "nvmf_tgt_poll_group_000", 00:10:59.701 "listen_address": { 00:10:59.701 "trtype": "TCP", 00:10:59.701 "adrfam": "IPv4", 00:10:59.701 "traddr": "10.0.0.2", 00:10:59.701 "trsvcid": "4420" 00:10:59.701 }, 00:10:59.701 "peer_address": { 00:10:59.701 "trtype": "TCP", 00:10:59.701 "adrfam": "IPv4", 00:10:59.701 "traddr": "10.0.0.1", 00:10:59.701 "trsvcid": "39870" 00:10:59.701 }, 00:10:59.701 "auth": { 00:10:59.701 "state": "completed", 00:10:59.701 "digest": "sha384", 00:10:59.701 "dhgroup": "ffdhe6144" 00:10:59.701 } 00:10:59.701 } 00:10:59.701 ]' 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:59.701 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.959 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:10:59.959 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.959 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.959 21:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:00.526 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:00.785 21:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.352 00:11:01.352 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.352 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.352 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.611 { 00:11:01.611 "cntlid": 83, 00:11:01.611 "qid": 0, 00:11:01.611 "state": "enabled", 00:11:01.611 "thread": "nvmf_tgt_poll_group_000", 00:11:01.611 "listen_address": { 00:11:01.611 "trtype": "TCP", 00:11:01.611 "adrfam": "IPv4", 00:11:01.611 "traddr": "10.0.0.2", 00:11:01.611 "trsvcid": "4420" 00:11:01.611 }, 00:11:01.611 "peer_address": { 00:11:01.611 "trtype": "TCP", 00:11:01.611 "adrfam": "IPv4", 00:11:01.611 "traddr": "10.0.0.1", 00:11:01.611 "trsvcid": "39900" 00:11:01.611 }, 00:11:01.611 "auth": { 00:11:01.611 "state": "completed", 00:11:01.611 "digest": "sha384", 00:11:01.611 "dhgroup": "ffdhe6144" 00:11:01.611 } 00:11:01.611 } 00:11:01.611 ]' 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.611 21:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.869 21:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:02.435 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.693 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.951 00:11:02.951 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.951 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.951 21:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.210 { 00:11:03.210 "cntlid": 85, 00:11:03.210 "qid": 0, 00:11:03.210 "state": "enabled", 00:11:03.210 "thread": "nvmf_tgt_poll_group_000", 00:11:03.210 "listen_address": { 00:11:03.210 "trtype": "TCP", 00:11:03.210 "adrfam": "IPv4", 00:11:03.210 "traddr": "10.0.0.2", 00:11:03.210 "trsvcid": "4420" 00:11:03.210 }, 00:11:03.210 "peer_address": { 00:11:03.210 "trtype": "TCP", 00:11:03.210 "adrfam": "IPv4", 00:11:03.210 "traddr": "10.0.0.1", 00:11:03.210 "trsvcid": "39934" 00:11:03.210 }, 00:11:03.210 "auth": { 00:11:03.210 "state": "completed", 00:11:03.210 "digest": "sha384", 00:11:03.210 "dhgroup": "ffdhe6144" 00:11:03.210 } 00:11:03.210 } 00:11:03.210 ]' 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.210 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.468 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:03.468 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.468 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.468 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.468 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.727 21:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret 
DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.293 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:04.294 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:04.860 00:11:04.860 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.860 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:11:04.860 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.119 { 00:11:05.119 "cntlid": 87, 00:11:05.119 "qid": 0, 00:11:05.119 "state": "enabled", 00:11:05.119 "thread": "nvmf_tgt_poll_group_000", 00:11:05.119 "listen_address": { 00:11:05.119 "trtype": "TCP", 00:11:05.119 "adrfam": "IPv4", 00:11:05.119 "traddr": "10.0.0.2", 00:11:05.119 "trsvcid": "4420" 00:11:05.119 }, 00:11:05.119 "peer_address": { 00:11:05.119 "trtype": "TCP", 00:11:05.119 "adrfam": "IPv4", 00:11:05.119 "traddr": "10.0.0.1", 00:11:05.119 "trsvcid": "39954" 00:11:05.119 }, 00:11:05.119 "auth": { 00:11:05.119 "state": "completed", 00:11:05.119 "digest": "sha384", 00:11:05.119 "dhgroup": "ffdhe6144" 00:11:05.119 } 00:11:05.119 } 00:11:05.119 ]' 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:05.119 21:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.119 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.119 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.119 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.377 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:05.944 21:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:05.944 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.203 21:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.203 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.203 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.203 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.771 00:11:06.771 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.771 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.771 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.030 { 00:11:07.030 "cntlid": 89, 00:11:07.030 "qid": 0, 00:11:07.030 "state": "enabled", 00:11:07.030 "thread": "nvmf_tgt_poll_group_000", 00:11:07.030 "listen_address": { 00:11:07.030 "trtype": "TCP", 00:11:07.030 "adrfam": "IPv4", 00:11:07.030 "traddr": "10.0.0.2", 00:11:07.030 "trsvcid": "4420" 00:11:07.030 }, 00:11:07.030 "peer_address": { 00:11:07.030 "trtype": "TCP", 00:11:07.030 "adrfam": "IPv4", 00:11:07.030 "traddr": "10.0.0.1", 00:11:07.030 "trsvcid": "47240" 00:11:07.030 }, 00:11:07.030 "auth": { 00:11:07.030 "state": "completed", 00:11:07.030 "digest": "sha384", 00:11:07.030 "dhgroup": "ffdhe8192" 00:11:07.030 } 00:11:07.030 } 00:11:07.030 ]' 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:07.030 21:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.030 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.030 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.030 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.289 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.222 21:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:08.222 21:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.222 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.789 00:11:08.789 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.789 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.789 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.048 { 00:11:09.048 "cntlid": 91, 00:11:09.048 "qid": 0, 00:11:09.048 "state": "enabled", 00:11:09.048 "thread": "nvmf_tgt_poll_group_000", 00:11:09.048 "listen_address": { 00:11:09.048 "trtype": "TCP", 00:11:09.048 "adrfam": "IPv4", 00:11:09.048 "traddr": "10.0.0.2", 00:11:09.048 "trsvcid": "4420" 00:11:09.048 }, 00:11:09.048 "peer_address": { 00:11:09.048 "trtype": "TCP", 00:11:09.048 "adrfam": "IPv4", 00:11:09.048 "traddr": "10.0.0.1", 00:11:09.048 "trsvcid": "47268" 00:11:09.048 }, 00:11:09.048 "auth": { 00:11:09.048 "state": "completed", 00:11:09.048 "digest": "sha384", 00:11:09.048 "dhgroup": "ffdhe8192" 00:11:09.048 } 00:11:09.048 } 00:11:09.048 ]' 00:11:09.048 21:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.306 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.564 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.132 21:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:10.132 21:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.132 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.699 00:11:10.699 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.699 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.699 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.958 { 00:11:10.958 "cntlid": 93, 00:11:10.958 "qid": 0, 00:11:10.958 "state": "enabled", 00:11:10.958 "thread": "nvmf_tgt_poll_group_000", 00:11:10.958 
"listen_address": { 00:11:10.958 "trtype": "TCP", 00:11:10.958 "adrfam": "IPv4", 00:11:10.958 "traddr": "10.0.0.2", 00:11:10.958 "trsvcid": "4420" 00:11:10.958 }, 00:11:10.958 "peer_address": { 00:11:10.958 "trtype": "TCP", 00:11:10.958 "adrfam": "IPv4", 00:11:10.958 "traddr": "10.0.0.1", 00:11:10.958 "trsvcid": "47308" 00:11:10.958 }, 00:11:10.958 "auth": { 00:11:10.958 "state": "completed", 00:11:10.958 "digest": "sha384", 00:11:10.958 "dhgroup": "ffdhe8192" 00:11:10.958 } 00:11:10.958 } 00:11:10.958 ]' 00:11:10.958 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.216 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.216 21:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.216 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:11.216 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.216 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.216 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.216 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.475 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:12.043 21:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:12.301 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:12.868 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.868 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.127 { 00:11:13.127 "cntlid": 95, 00:11:13.127 "qid": 0, 00:11:13.127 "state": "enabled", 00:11:13.127 "thread": "nvmf_tgt_poll_group_000", 00:11:13.127 "listen_address": { 00:11:13.127 "trtype": "TCP", 00:11:13.127 "adrfam": "IPv4", 00:11:13.127 "traddr": "10.0.0.2", 00:11:13.127 "trsvcid": "4420" 00:11:13.127 }, 00:11:13.127 "peer_address": { 00:11:13.127 "trtype": "TCP", 00:11:13.127 "adrfam": "IPv4", 00:11:13.127 "traddr": "10.0.0.1", 00:11:13.127 "trsvcid": "47342" 00:11:13.127 }, 00:11:13.127 "auth": { 00:11:13.127 "state": "completed", 00:11:13.127 "digest": "sha384", 00:11:13.127 "dhgroup": "ffdhe8192" 00:11:13.127 } 00:11:13.127 } 00:11:13.127 ]' 
00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:13.127 21:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.127 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.127 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.127 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.386 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:13.953 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.212 21:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.471 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.471 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.472 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.472 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.472 { 00:11:14.472 "cntlid": 97, 00:11:14.472 "qid": 0, 00:11:14.472 "state": "enabled", 00:11:14.472 "thread": "nvmf_tgt_poll_group_000", 00:11:14.472 "listen_address": { 00:11:14.472 "trtype": "TCP", 00:11:14.472 "adrfam": "IPv4", 00:11:14.472 "traddr": "10.0.0.2", 00:11:14.472 "trsvcid": "4420" 00:11:14.472 }, 00:11:14.472 "peer_address": { 00:11:14.472 "trtype": "TCP", 00:11:14.472 "adrfam": "IPv4", 00:11:14.472 "traddr": "10.0.0.1", 00:11:14.472 "trsvcid": "47366" 00:11:14.472 }, 00:11:14.472 "auth": { 00:11:14.472 "state": "completed", 00:11:14.472 "digest": "sha512", 00:11:14.472 "dhgroup": "null" 00:11:14.472 } 00:11:14.472 } 00:11:14.472 ]' 00:11:14.472 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.731 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:14.731 21:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.731 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:14.731 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.731 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.731 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.731 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.990 21:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
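Each pass of the connect_authenticate loop logged above reduces to the same target/host RPC sequence. A minimal sketch of the setup half, using the wrapper names from target/auth.sh (hostrpc expands to /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock, as shown at auth.sh@31; rpc_cmd is assumed to be the equivalent wrapper for the target-side RPC socket), with the NQNs, address and key names taken from this run:

# Limit the host's bdev_nvme layer to the digest/dhgroup pair under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Authorize the host NQN on the target subsystem with the key pair being exercised
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the SPDK host with the matching keys; this is where
# DH-HMAC-CHAP actually runs against the listener at 10.0.0.2:4420
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

The remaining iterations in the log differ only in the arguments passed to bdev_nvme_set_options (sha512 with null, ffdhe2048, ffdhe3072, ...) and in which keyN/ckeyN pair is supplied.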
00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.611 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.612 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.612 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.870 00:11:15.870 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.870 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.870 21:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.128 { 00:11:16.128 "cntlid": 99, 00:11:16.128 "qid": 0, 00:11:16.128 "state": "enabled", 00:11:16.128 "thread": "nvmf_tgt_poll_group_000", 00:11:16.128 "listen_address": { 00:11:16.128 "trtype": "TCP", 00:11:16.128 "adrfam": "IPv4", 00:11:16.128 "traddr": "10.0.0.2", 00:11:16.128 "trsvcid": "4420" 00:11:16.128 }, 00:11:16.128 "peer_address": { 00:11:16.128 "trtype": "TCP", 00:11:16.128 "adrfam": "IPv4", 00:11:16.128 "traddr": "10.0.0.1", 00:11:16.128 "trsvcid": "49320" 00:11:16.128 }, 00:11:16.128 "auth": { 00:11:16.128 "state": "completed", 00:11:16.128 "digest": "sha512", 00:11:16.128 "dhgroup": "null" 00:11:16.128 } 00:11:16.128 } 00:11:16.128 ]' 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:16.128 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.387 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:16.387 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.387 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.387 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:16.954 21:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.213 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.471 00:11:17.471 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.471 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.471 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.730 { 00:11:17.730 "cntlid": 101, 00:11:17.730 "qid": 0, 00:11:17.730 "state": "enabled", 00:11:17.730 "thread": "nvmf_tgt_poll_group_000", 00:11:17.730 "listen_address": { 00:11:17.730 "trtype": "TCP", 00:11:17.730 "adrfam": "IPv4", 00:11:17.730 "traddr": "10.0.0.2", 00:11:17.730 "trsvcid": "4420" 00:11:17.730 }, 00:11:17.730 "peer_address": { 00:11:17.730 "trtype": "TCP", 00:11:17.730 "adrfam": "IPv4", 00:11:17.730 "traddr": "10.0.0.1", 00:11:17.730 "trsvcid": "49344" 00:11:17.730 }, 00:11:17.730 "auth": { 00:11:17.730 "state": "completed", 00:11:17.730 "digest": "sha512", 00:11:17.730 "dhgroup": "null" 00:11:17.730 } 00:11:17.730 } 00:11:17.730 ]' 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:17.730 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.988 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:17.988 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.988 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.988 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.988 21:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.246 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.811 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:11:19.069 00:11:19.069 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.069 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.069 21:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.327 { 00:11:19.327 "cntlid": 103, 00:11:19.327 "qid": 0, 00:11:19.327 "state": "enabled", 00:11:19.327 "thread": "nvmf_tgt_poll_group_000", 00:11:19.327 "listen_address": { 00:11:19.327 "trtype": "TCP", 00:11:19.327 "adrfam": "IPv4", 00:11:19.327 "traddr": "10.0.0.2", 00:11:19.327 "trsvcid": "4420" 00:11:19.327 }, 00:11:19.327 "peer_address": { 00:11:19.327 "trtype": "TCP", 00:11:19.327 "adrfam": "IPv4", 00:11:19.327 "traddr": "10.0.0.1", 00:11:19.327 "trsvcid": "49372" 00:11:19.327 }, 00:11:19.327 "auth": { 00:11:19.327 "state": "completed", 00:11:19.327 "digest": "sha512", 00:11:19.327 "dhgroup": "null" 00:11:19.327 } 00:11:19.327 } 00:11:19.327 ]' 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.327 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.586 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:19.586 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.586 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.586 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.586 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.586 21:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:20.154 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:20.412 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:20.412 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.412 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:20.412 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.413 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.671 00:11:20.671 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.671 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.671 21:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.930 { 00:11:20.930 "cntlid": 105, 00:11:20.930 "qid": 0, 00:11:20.930 "state": "enabled", 00:11:20.930 "thread": "nvmf_tgt_poll_group_000", 00:11:20.930 "listen_address": { 00:11:20.930 "trtype": "TCP", 00:11:20.930 "adrfam": "IPv4", 00:11:20.930 "traddr": "10.0.0.2", 00:11:20.930 "trsvcid": "4420" 00:11:20.930 }, 00:11:20.930 "peer_address": { 00:11:20.930 "trtype": "TCP", 00:11:20.930 "adrfam": "IPv4", 00:11:20.930 "traddr": "10.0.0.1", 00:11:20.930 "trsvcid": "49402" 00:11:20.930 }, 00:11:20.930 "auth": { 00:11:20.930 "state": "completed", 00:11:20.930 "digest": "sha512", 00:11:20.930 "dhgroup": "ffdhe2048" 00:11:20.930 } 00:11:20.930 } 00:11:20.930 ]' 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.930 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.188 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.188 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.189 21:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.189 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:21.756 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.014 21:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.273 00:11:22.273 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.273 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.273 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.533 21:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.533 { 00:11:22.533 "cntlid": 107, 00:11:22.533 "qid": 0, 00:11:22.533 "state": "enabled", 00:11:22.533 "thread": "nvmf_tgt_poll_group_000", 00:11:22.533 "listen_address": { 00:11:22.533 "trtype": "TCP", 00:11:22.533 "adrfam": "IPv4", 00:11:22.533 "traddr": "10.0.0.2", 00:11:22.533 "trsvcid": "4420" 00:11:22.533 }, 00:11:22.533 "peer_address": { 00:11:22.533 "trtype": "TCP", 00:11:22.533 "adrfam": "IPv4", 00:11:22.533 "traddr": "10.0.0.1", 00:11:22.533 "trsvcid": "49422" 00:11:22.533 }, 00:11:22.533 "auth": { 00:11:22.533 "state": "completed", 00:11:22.533 "digest": "sha512", 00:11:22.533 "dhgroup": "ffdhe2048" 00:11:22.533 } 00:11:22.533 } 00:11:22.533 ]' 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.533 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.792 21:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.359 21:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:23.359 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.618 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.619 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.877 00:11:23.877 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.877 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.877 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.136 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.136 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.136 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.136 21:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:24.136 { 00:11:24.136 "cntlid": 109, 00:11:24.136 "qid": 0, 00:11:24.136 "state": "enabled", 00:11:24.136 "thread": "nvmf_tgt_poll_group_000", 00:11:24.136 "listen_address": { 00:11:24.136 "trtype": "TCP", 00:11:24.136 "adrfam": "IPv4", 00:11:24.136 "traddr": "10.0.0.2", 00:11:24.136 "trsvcid": "4420" 00:11:24.136 }, 00:11:24.136 "peer_address": { 00:11:24.136 "trtype": "TCP", 00:11:24.136 "adrfam": "IPv4", 00:11:24.136 "traddr": "10.0.0.1", 00:11:24.136 "trsvcid": "49452" 00:11:24.136 }, 00:11:24.136 "auth": { 00:11:24.136 "state": "completed", 00:11:24.136 "digest": "sha512", 00:11:24.136 "dhgroup": "ffdhe2048" 00:11:24.136 } 00:11:24.136 } 00:11:24.136 ]' 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:24.136 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.395 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.395 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.395 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.654 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:25.222 21:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:25.222 21:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:25.222 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.223 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.223 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.223 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.223 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.481 00:11:25.481 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.481 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.481 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.740 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.740 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.740 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.740 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.740 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.740 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.740 { 00:11:25.740 "cntlid": 111, 00:11:25.740 "qid": 0, 00:11:25.740 "state": "enabled", 00:11:25.740 "thread": "nvmf_tgt_poll_group_000", 00:11:25.740 "listen_address": { 00:11:25.740 "trtype": "TCP", 00:11:25.740 "adrfam": "IPv4", 00:11:25.740 "traddr": "10.0.0.2", 00:11:25.740 "trsvcid": "4420" 00:11:25.740 }, 00:11:25.740 "peer_address": { 00:11:25.740 "trtype": "TCP", 00:11:25.740 "adrfam": "IPv4", 00:11:25.740 "traddr": "10.0.0.1", 00:11:25.740 "trsvcid": 
"33126" 00:11:25.740 }, 00:11:25.740 "auth": { 00:11:25.740 "state": "completed", 00:11:25.740 "digest": "sha512", 00:11:25.740 "dhgroup": "ffdhe2048" 00:11:25.740 } 00:11:25.740 } 00:11:25.740 ]' 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.741 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.999 21:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:26.567 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.826 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.085 00:11:27.085 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.085 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.085 21:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.344 { 00:11:27.344 "cntlid": 113, 00:11:27.344 "qid": 0, 00:11:27.344 "state": "enabled", 00:11:27.344 "thread": "nvmf_tgt_poll_group_000", 00:11:27.344 "listen_address": { 00:11:27.344 "trtype": "TCP", 00:11:27.344 "adrfam": "IPv4", 00:11:27.344 "traddr": "10.0.0.2", 00:11:27.344 "trsvcid": "4420" 00:11:27.344 }, 00:11:27.344 "peer_address": { 00:11:27.344 "trtype": "TCP", 00:11:27.344 "adrfam": "IPv4", 00:11:27.344 "traddr": "10.0.0.1", 00:11:27.344 "trsvcid": "33148" 00:11:27.344 }, 00:11:27.344 "auth": { 00:11:27.344 "state": "completed", 00:11:27.344 "digest": "sha512", 00:11:27.344 "dhgroup": "ffdhe3072" 00:11:27.344 } 00:11:27.344 } 00:11:27.344 ]' 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.344 21:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.344 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.603 21:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:28.170 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.170 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:28.170 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.170 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.171 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.171 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.171 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:28.171 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:28.429 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:11:28.429 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.429 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:28.429 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.430 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.997 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.997 21:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.256 { 00:11:29.256 "cntlid": 115, 00:11:29.256 "qid": 0, 00:11:29.256 "state": "enabled", 00:11:29.256 "thread": "nvmf_tgt_poll_group_000", 00:11:29.256 "listen_address": { 00:11:29.256 "trtype": "TCP", 00:11:29.256 "adrfam": "IPv4", 00:11:29.256 "traddr": "10.0.0.2", 00:11:29.256 "trsvcid": "4420" 00:11:29.256 }, 00:11:29.256 "peer_address": { 00:11:29.256 "trtype": "TCP", 00:11:29.256 "adrfam": "IPv4", 00:11:29.256 "traddr": "10.0.0.1", 00:11:29.256 "trsvcid": "33186" 00:11:29.256 }, 00:11:29.256 "auth": { 00:11:29.256 "state": "completed", 00:11:29.256 "digest": "sha512", 00:11:29.256 "dhgroup": "ffdhe3072" 00:11:29.256 } 00:11:29.256 } 00:11:29.256 ]' 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.256 21:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.256 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.514 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:30.081 21:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.339 21:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.339 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.598 00:11:30.598 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.598 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.598 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.857 { 00:11:30.857 "cntlid": 117, 00:11:30.857 "qid": 0, 00:11:30.857 "state": "enabled", 00:11:30.857 "thread": "nvmf_tgt_poll_group_000", 00:11:30.857 "listen_address": { 00:11:30.857 "trtype": "TCP", 00:11:30.857 "adrfam": "IPv4", 00:11:30.857 "traddr": "10.0.0.2", 00:11:30.857 "trsvcid": "4420" 00:11:30.857 }, 00:11:30.857 "peer_address": { 00:11:30.857 "trtype": "TCP", 00:11:30.857 "adrfam": "IPv4", 00:11:30.857 "traddr": "10.0.0.1", 00:11:30.857 "trsvcid": "33228" 00:11:30.857 }, 00:11:30.857 "auth": { 00:11:30.857 "state": "completed", 00:11:30.857 "digest": "sha512", 00:11:30.857 "dhgroup": "ffdhe3072" 00:11:30.857 } 00:11:30.857 } 00:11:30.857 ]' 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.857 21:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.116 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:31.685 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:31.944 21:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.203 00:11:32.203 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.203 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.203 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.462 { 00:11:32.462 "cntlid": 119, 00:11:32.462 "qid": 0, 00:11:32.462 "state": "enabled", 00:11:32.462 "thread": "nvmf_tgt_poll_group_000", 00:11:32.462 "listen_address": { 00:11:32.462 "trtype": "TCP", 00:11:32.462 "adrfam": "IPv4", 00:11:32.462 "traddr": "10.0.0.2", 00:11:32.462 "trsvcid": "4420" 00:11:32.462 }, 00:11:32.462 "peer_address": { 00:11:32.462 "trtype": "TCP", 00:11:32.462 "adrfam": "IPv4", 00:11:32.462 "traddr": "10.0.0.1", 00:11:32.462 "trsvcid": "33248" 00:11:32.462 }, 00:11:32.462 "auth": { 00:11:32.462 "state": "completed", 00:11:32.462 "digest": "sha512", 00:11:32.462 "dhgroup": "ffdhe3072" 00:11:32.462 } 00:11:32.462 } 00:11:32.462 ]' 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.462 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.721 21:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret 
DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:33.288 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:33.546 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.547 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
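Each pass traced here follows the same shape: pin the host-side bdev layer to one DH-HMAC-CHAP digest and DH group, register the host on the target with the key under test (plus a controller key when bidirectional authentication is exercised), attach a controller over TCP, read the negotiated digest, dhgroup and state back from nvmf_subsystem_get_qpairs, and tear everything down, including a kernel-initiator nvme connect/disconnect with the raw DHHC-1 secrets, before the next combination. Below is a minimal bash sketch of that loop, reconstructed from the commands visible in this trace rather than copied from target/auth.sh. It assumes the keys were already registered as key0..key3 and ckey0..ckey2 earlier in the run (ckey3 is deliberately absent, so key3 authenticates the host only), assumes the target answers on SPDK's default RPC socket, and leaves the nvme-cli step out.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass as seen in the trace above; an
# editorial reconstruction, not the actual target/auth.sh. Key names
# key0..key3 / ckey0..ckey2 are assumed to have been registered earlier in
# the run (that setup is outside this excerpt).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the log
hostsock=/var/tmp/host.sock                       # host-side RPC socket (from the log)
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158
# Non-empty entry = a controller (bidirectional) key exists for that index;
# in this run index 3 has none, so key3 authenticates the host only.
ckeys=(1 1 1 "")

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Host side: restrict the initiator to a single digest and DH group.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side (default RPC socket assumed): admit the host with this key.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Attach over TCP, then read the negotiated auth parameters off the qpair.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

    # Tear down before the next digest/dhgroup/key combination.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

# The combination in flight at this point in the log:
connect_authenticate sha512 ffdhe4096 0

target/auth.sh repeats this for every digest, DH group and key index, which is why the same qpair JSON reappears above and below with only the dhgroup and key changing.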
00:11:33.805 00:11:33.805 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.805 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.805 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.064 { 00:11:34.064 "cntlid": 121, 00:11:34.064 "qid": 0, 00:11:34.064 "state": "enabled", 00:11:34.064 "thread": "nvmf_tgt_poll_group_000", 00:11:34.064 "listen_address": { 00:11:34.064 "trtype": "TCP", 00:11:34.064 "adrfam": "IPv4", 00:11:34.064 "traddr": "10.0.0.2", 00:11:34.064 "trsvcid": "4420" 00:11:34.064 }, 00:11:34.064 "peer_address": { 00:11:34.064 "trtype": "TCP", 00:11:34.064 "adrfam": "IPv4", 00:11:34.064 "traddr": "10.0.0.1", 00:11:34.064 "trsvcid": "33268" 00:11:34.064 }, 00:11:34.064 "auth": { 00:11:34.064 "state": "completed", 00:11:34.064 "digest": "sha512", 00:11:34.064 "dhgroup": "ffdhe4096" 00:11:34.064 } 00:11:34.064 } 00:11:34.064 ]' 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:34.064 21:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.064 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:34.064 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.323 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.323 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.323 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.582 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.149 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:35.149 21:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.149 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.407 00:11:35.407 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.407 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.407 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.664 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.664 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.664 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.664 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.664 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.664 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.664 { 00:11:35.664 "cntlid": 123, 00:11:35.664 "qid": 0, 00:11:35.664 "state": "enabled", 00:11:35.664 "thread": "nvmf_tgt_poll_group_000", 00:11:35.664 "listen_address": { 00:11:35.664 "trtype": "TCP", 00:11:35.665 "adrfam": "IPv4", 00:11:35.665 "traddr": "10.0.0.2", 00:11:35.665 "trsvcid": "4420" 00:11:35.665 }, 00:11:35.665 "peer_address": { 00:11:35.665 "trtype": "TCP", 00:11:35.665 "adrfam": "IPv4", 00:11:35.665 "traddr": "10.0.0.1", 00:11:35.665 "trsvcid": "58944" 00:11:35.665 }, 00:11:35.665 "auth": { 00:11:35.665 "state": "completed", 00:11:35.665 "digest": "sha512", 00:11:35.665 "dhgroup": "ffdhe4096" 00:11:35.665 } 00:11:35.665 } 00:11:35.665 ]' 00:11:35.665 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.665 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.665 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.665 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.665 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.922 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.922 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.922 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.922 21:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:36.489 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.747 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.006 00:11:37.006 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.006 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.006 21:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.264 { 00:11:37.264 "cntlid": 125, 00:11:37.264 "qid": 0, 00:11:37.264 "state": "enabled", 00:11:37.264 "thread": "nvmf_tgt_poll_group_000", 00:11:37.264 "listen_address": { 00:11:37.264 "trtype": "TCP", 00:11:37.264 "adrfam": "IPv4", 00:11:37.264 "traddr": "10.0.0.2", 00:11:37.264 "trsvcid": "4420" 00:11:37.264 }, 00:11:37.264 "peer_address": { 00:11:37.264 "trtype": "TCP", 00:11:37.264 "adrfam": "IPv4", 00:11:37.264 "traddr": "10.0.0.1", 00:11:37.264 "trsvcid": "58978" 00:11:37.264 }, 00:11:37.264 "auth": { 00:11:37.264 "state": "completed", 00:11:37.264 "digest": "sha512", 00:11:37.264 "dhgroup": "ffdhe4096" 00:11:37.264 } 00:11:37.264 } 00:11:37.264 ]' 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.264 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.522 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.522 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.522 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.522 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.522 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.522 21:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:38.089 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.349 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.616 00:11:38.616 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.616 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.616 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.895 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.895 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.895 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.895 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.895 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.895 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.895 { 00:11:38.895 "cntlid": 127, 00:11:38.895 "qid": 0, 00:11:38.895 "state": "enabled", 00:11:38.895 "thread": 
"nvmf_tgt_poll_group_000", 00:11:38.895 "listen_address": { 00:11:38.895 "trtype": "TCP", 00:11:38.895 "adrfam": "IPv4", 00:11:38.895 "traddr": "10.0.0.2", 00:11:38.896 "trsvcid": "4420" 00:11:38.896 }, 00:11:38.896 "peer_address": { 00:11:38.896 "trtype": "TCP", 00:11:38.896 "adrfam": "IPv4", 00:11:38.896 "traddr": "10.0.0.1", 00:11:38.896 "trsvcid": "58996" 00:11:38.896 }, 00:11:38.896 "auth": { 00:11:38.896 "state": "completed", 00:11:38.896 "digest": "sha512", 00:11:38.896 "dhgroup": "ffdhe4096" 00:11:38.896 } 00:11:38.896 } 00:11:38.896 ]' 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.896 21:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.170 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:39.737 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:39.995 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe6144 0 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.996 21:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.254 00:11:40.254 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.254 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.254 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.512 { 00:11:40.512 "cntlid": 129, 00:11:40.512 "qid": 0, 00:11:40.512 "state": "enabled", 00:11:40.512 "thread": "nvmf_tgt_poll_group_000", 00:11:40.512 "listen_address": { 00:11:40.512 "trtype": "TCP", 00:11:40.512 "adrfam": "IPv4", 00:11:40.512 "traddr": "10.0.0.2", 00:11:40.512 "trsvcid": "4420" 00:11:40.512 }, 00:11:40.512 "peer_address": { 00:11:40.512 "trtype": "TCP", 00:11:40.512 "adrfam": "IPv4", 00:11:40.512 "traddr": "10.0.0.1", 00:11:40.512 "trsvcid": "59034" 00:11:40.512 }, 
00:11:40.512 "auth": { 00:11:40.512 "state": "completed", 00:11:40.512 "digest": "sha512", 00:11:40.512 "dhgroup": "ffdhe6144" 00:11:40.512 } 00:11:40.512 } 00:11:40.512 ]' 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.512 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.771 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.771 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.771 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.771 21:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:41.338 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:41.597 21:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.597 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.856 00:11:42.114 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.114 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.114 21:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.114 { 00:11:42.114 "cntlid": 131, 00:11:42.114 "qid": 0, 00:11:42.114 "state": "enabled", 00:11:42.114 "thread": "nvmf_tgt_poll_group_000", 00:11:42.114 "listen_address": { 00:11:42.114 "trtype": "TCP", 00:11:42.114 "adrfam": "IPv4", 00:11:42.114 "traddr": "10.0.0.2", 00:11:42.114 "trsvcid": "4420" 00:11:42.114 }, 00:11:42.114 "peer_address": { 00:11:42.114 "trtype": "TCP", 00:11:42.114 "adrfam": "IPv4", 00:11:42.114 "traddr": "10.0.0.1", 00:11:42.114 "trsvcid": "59052" 00:11:42.114 }, 00:11:42.114 "auth": { 00:11:42.114 "state": "completed", 00:11:42.114 "digest": "sha512", 00:11:42.114 "dhgroup": "ffdhe6144" 00:11:42.114 } 00:11:42.114 } 00:11:42.114 ]' 00:11:42.114 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.372 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.631 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:43.198 21:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.198 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.767 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.767 { 00:11:43.767 "cntlid": 133, 00:11:43.767 "qid": 0, 00:11:43.767 "state": "enabled", 00:11:43.767 "thread": "nvmf_tgt_poll_group_000", 00:11:43.767 "listen_address": { 00:11:43.767 "trtype": "TCP", 00:11:43.767 "adrfam": "IPv4", 00:11:43.767 "traddr": "10.0.0.2", 00:11:43.767 "trsvcid": "4420" 00:11:43.767 }, 00:11:43.767 "peer_address": { 00:11:43.767 "trtype": "TCP", 00:11:43.767 "adrfam": "IPv4", 00:11:43.767 "traddr": "10.0.0.1", 00:11:43.767 "trsvcid": "59068" 00:11:43.767 }, 00:11:43.767 "auth": { 00:11:43.767 "state": "completed", 00:11:43.767 "digest": "sha512", 00:11:43.767 "dhgroup": "ffdhe6144" 00:11:43.767 } 00:11:43.767 } 00:11:43.767 ]' 00:11:43.767 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.026 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.026 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.026 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.026 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.026 21:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.026 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.026 21:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.284 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:44.851 21:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.111 21:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:45.111 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:45.679 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.679 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.938 { 00:11:45.938 "cntlid": 135, 00:11:45.938 "qid": 0, 00:11:45.938 "state": "enabled", 00:11:45.938 "thread": "nvmf_tgt_poll_group_000", 00:11:45.938 "listen_address": { 00:11:45.938 "trtype": "TCP", 00:11:45.938 "adrfam": "IPv4", 00:11:45.938 "traddr": "10.0.0.2", 00:11:45.938 "trsvcid": "4420" 00:11:45.938 }, 00:11:45.938 "peer_address": { 00:11:45.938 "trtype": "TCP", 00:11:45.938 "adrfam": "IPv4", 00:11:45.938 "traddr": "10.0.0.1", 00:11:45.938 "trsvcid": "45508" 00:11:45.938 }, 00:11:45.938 "auth": { 00:11:45.938 "state": "completed", 00:11:45.938 "digest": "sha512", 00:11:45.938 "dhgroup": "ffdhe6144" 00:11:45.938 } 00:11:45.938 } 00:11:45.938 ]' 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.938 21:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.197 21:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:46.765 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.024 21:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.591 00:11:47.591 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.591 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.591 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.852 { 00:11:47.852 "cntlid": 137, 00:11:47.852 "qid": 0, 00:11:47.852 "state": "enabled", 00:11:47.852 "thread": "nvmf_tgt_poll_group_000", 00:11:47.852 "listen_address": { 00:11:47.852 "trtype": "TCP", 00:11:47.852 "adrfam": "IPv4", 00:11:47.852 "traddr": "10.0.0.2", 00:11:47.852 "trsvcid": "4420" 00:11:47.852 }, 00:11:47.852 "peer_address": { 00:11:47.852 "trtype": "TCP", 00:11:47.852 "adrfam": "IPv4", 00:11:47.852 "traddr": "10.0.0.1", 00:11:47.852 "trsvcid": "45526" 00:11:47.852 }, 00:11:47.852 "auth": { 00:11:47.852 "state": "completed", 00:11:47.852 "digest": "sha512", 00:11:47.852 "dhgroup": "ffdhe8192" 00:11:47.852 } 00:11:47.852 } 00:11:47.852 ]' 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.852 21:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.111 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: 
--dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:48.678 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:48.935 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:11:48.935 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.935 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:48.935 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:48.935 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.936 21:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.501 00:11:49.501 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:11:49.501 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.501 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.759 { 00:11:49.759 "cntlid": 139, 00:11:49.759 "qid": 0, 00:11:49.759 "state": "enabled", 00:11:49.759 "thread": "nvmf_tgt_poll_group_000", 00:11:49.759 "listen_address": { 00:11:49.759 "trtype": "TCP", 00:11:49.759 "adrfam": "IPv4", 00:11:49.759 "traddr": "10.0.0.2", 00:11:49.759 "trsvcid": "4420" 00:11:49.759 }, 00:11:49.759 "peer_address": { 00:11:49.759 "trtype": "TCP", 00:11:49.759 "adrfam": "IPv4", 00:11:49.759 "traddr": "10.0.0.1", 00:11:49.759 "trsvcid": "45556" 00:11:49.759 }, 00:11:49.759 "auth": { 00:11:49.759 "state": "completed", 00:11:49.759 "digest": "sha512", 00:11:49.759 "dhgroup": "ffdhe8192" 00:11:49.759 } 00:11:49.759 } 00:11:49.759 ]' 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.759 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.017 21:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:01:ZmYyMzVmZjJjNzJkNDg5YTIzMzBiMTg1Y2FmZmIyOWQN2BnV: --dhchap-ctrl-secret DHHC-1:02:ZDI5YWU2MjQ5OTBkMTFiZGU0NzE5NWFmODYwNjlkNzNlMDcwZmRiNTBlZGE0NThi0FkQ4A==: 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:50.585 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:50.843 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:11:50.843 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.843 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:50.843 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.844 21:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.411 00:11:51.411 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.411 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.411 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.670 { 00:11:51.670 "cntlid": 141, 00:11:51.670 "qid": 0, 00:11:51.670 "state": "enabled", 00:11:51.670 "thread": "nvmf_tgt_poll_group_000", 00:11:51.670 "listen_address": { 00:11:51.670 "trtype": "TCP", 00:11:51.670 "adrfam": "IPv4", 00:11:51.670 "traddr": "10.0.0.2", 00:11:51.670 "trsvcid": "4420" 00:11:51.670 }, 00:11:51.670 "peer_address": { 00:11:51.670 "trtype": "TCP", 00:11:51.670 "adrfam": "IPv4", 00:11:51.670 "traddr": "10.0.0.1", 00:11:51.670 "trsvcid": "45594" 00:11:51.670 }, 00:11:51.670 "auth": { 00:11:51.670 "state": "completed", 00:11:51.670 "digest": "sha512", 00:11:51.670 "dhgroup": "ffdhe8192" 00:11:51.670 } 00:11:51.670 } 00:11:51.670 ]' 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.670 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.929 21:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:02:YzIzM2Q4ZGEyZDFiODIwMjQ2ZjYzNjg1YzNjOGJmNTk4NTE2MTk1NjQ4NGFmM2Jhaf7tFQ==: --dhchap-ctrl-secret DHHC-1:01:ODA2MGEzNTEwNzZlZmEyZjJiZWVjOTA1YzQxMTEwYWFgE6zS: 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.496 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.756 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.756 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.756 21:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.015 00:11:53.015 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.015 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.015 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.274 { 00:11:53.274 "cntlid": 143, 00:11:53.274 "qid": 0, 00:11:53.274 "state": "enabled", 00:11:53.274 "thread": "nvmf_tgt_poll_group_000", 00:11:53.274 "listen_address": { 00:11:53.274 "trtype": "TCP", 00:11:53.274 "adrfam": "IPv4", 00:11:53.274 "traddr": "10.0.0.2", 00:11:53.274 "trsvcid": "4420" 00:11:53.274 }, 00:11:53.274 "peer_address": { 00:11:53.274 "trtype": "TCP", 00:11:53.274 "adrfam": "IPv4", 00:11:53.274 "traddr": "10.0.0.1", 00:11:53.274 "trsvcid": "45606" 00:11:53.274 }, 00:11:53.274 "auth": { 00:11:53.274 "state": "completed", 00:11:53.274 "digest": "sha512", 00:11:53.274 "dhgroup": "ffdhe8192" 00:11:53.274 } 00:11:53.274 } 00:11:53.274 ]' 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.274 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.532 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:53.532 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.532 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.532 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.532 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.791 21:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:11:54.050 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.050 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:54.050 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.050 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.309 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.309 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:11:54.309 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:11:54.309 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:11:54.309 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:54.309 21:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:54.309 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.568 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.134 00:11:55.134 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.134 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.134 21:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.134 { 00:11:55.134 "cntlid": 145, 00:11:55.134 "qid": 0, 00:11:55.134 "state": "enabled", 00:11:55.134 "thread": "nvmf_tgt_poll_group_000", 00:11:55.134 "listen_address": { 00:11:55.134 "trtype": "TCP", 00:11:55.134 "adrfam": "IPv4", 00:11:55.134 "traddr": "10.0.0.2", 00:11:55.134 "trsvcid": "4420" 00:11:55.134 }, 00:11:55.134 "peer_address": { 00:11:55.134 "trtype": "TCP", 00:11:55.134 "adrfam": "IPv4", 00:11:55.134 "traddr": "10.0.0.1", 00:11:55.134 "trsvcid": "45648" 00:11:55.134 }, 00:11:55.134 "auth": { 00:11:55.134 "state": "completed", 00:11:55.134 "digest": "sha512", 00:11:55.134 "dhgroup": "ffdhe8192" 00:11:55.134 } 00:11:55.134 } 00:11:55.134 ]' 00:11:55.134 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.392 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.650 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:00:NjEzZmJkNTFlZjY1ZTkxMjY4MzQwZjBiZDliYzU3YjZmMTYzODg2NjUyMzZkNjY1uLsNhA==: --dhchap-ctrl-secret DHHC-1:03:MzI0YmYyNDYzMjgzZDA2ZDg5Mjc3NzkyYjVmYjUxNzQ3YjE0NDAyOWQ0NzkwNTVhMWNhY2MyOTRlZDg1NWI1Mr1ssro=: 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 00:11:56.216 21:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.216 21:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:11:56.216 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.217 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.217 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.475 request: 00:11:56.475 { 00:11:56.475 "name": "nvme0", 00:11:56.475 "trtype": "tcp", 00:11:56.475 "traddr": "10.0.0.2", 00:11:56.475 "adrfam": "ipv4", 00:11:56.475 "trsvcid": "4420", 00:11:56.475 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:56.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158", 00:11:56.475 "prchk_reftag": false, 00:11:56.475 "prchk_guard": false, 00:11:56.475 "hdgst": false, 00:11:56.475 "ddgst": false, 00:11:56.475 "dhchap_key": "key2", 00:11:56.475 "method": "bdev_nvme_attach_controller", 00:11:56.475 "req_id": 1 00:11:56.475 } 00:11:56.475 Got JSON-RPC error response 00:11:56.475 response: 00:11:56.475 { 00:11:56.475 "code": -5, 00:11:56.475 "message": "Input/output error" 00:11:56.475 } 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 
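[annotation] A condensed sketch of the mismatched-key check the trace above exercises — not part of the captured output. It strips the xtrace noise and reuses the harness helpers already visible in the log (rpc_cmd, hostrpc, NOT); $hostnqn stands in for nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158, and key1/key2 are the DH-HMAC-CHAP keys registered earlier in this run.

# Target side: the host is granted only key1 for this subsystem.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
# Host side: attaching with a key the subsystem was never given (key2) must fail;
# the NOT wrapper turns the expected JSON-RPC "Input/output error" (-5) into a pass.
NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2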
00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.475 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:56.476 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:57.042 request: 00:11:57.042 { 00:11:57.042 "name": "nvme0", 00:11:57.042 "trtype": "tcp", 00:11:57.042 "traddr": "10.0.0.2", 00:11:57.042 "adrfam": "ipv4", 00:11:57.042 "trsvcid": "4420", 00:11:57.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:57.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158", 00:11:57.042 "prchk_reftag": false, 00:11:57.042 "prchk_guard": false, 00:11:57.042 "hdgst": false, 00:11:57.042 "ddgst": false, 00:11:57.042 "dhchap_key": "key1", 00:11:57.042 "dhchap_ctrlr_key": "ckey2", 00:11:57.042 "method": "bdev_nvme_attach_controller", 
00:11:57.042 "req_id": 1 00:11:57.042 } 00:11:57.042 Got JSON-RPC error response 00:11:57.042 response: 00:11:57.042 { 00:11:57.042 "code": -5, 00:11:57.042 "message": "Input/output error" 00:11:57.042 } 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.042 21:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key1 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.042 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.043 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.609 request: 00:11:57.609 { 00:11:57.609 "name": "nvme0", 00:11:57.609 "trtype": "tcp", 00:11:57.609 "traddr": "10.0.0.2", 00:11:57.609 "adrfam": "ipv4", 00:11:57.609 "trsvcid": "4420", 00:11:57.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:57.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158", 00:11:57.609 "prchk_reftag": false, 00:11:57.609 "prchk_guard": false, 00:11:57.609 "hdgst": false, 00:11:57.609 "ddgst": false, 00:11:57.609 "dhchap_key": "key1", 00:11:57.609 "dhchap_ctrlr_key": "ckey1", 00:11:57.609 "method": "bdev_nvme_attach_controller", 00:11:57.609 "req_id": 1 00:11:57.609 } 00:11:57.609 Got JSON-RPC error response 00:11:57.609 response: 00:11:57.609 { 00:11:57.609 "code": -5, 00:11:57.609 "message": "Input/output error" 00:11:57.609 } 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68538 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68538 ']' 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68538 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68538 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.609 killing process with pid 68538 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68538' 00:11:57.609 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68538 00:11:57.610 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68538 00:11:57.868 21:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71311 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71311 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71311 ']' 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.868 21:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71311 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71311 ']' 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
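For reference, the target restart exercised in this part of the trace reduces to the sequence below. The kill, the launch command, its flags, and the paths are taken from the log itself; the polling loop is only an illustrative stand-in for the suite's waitforlisten helper, not its real implementation.

# Stop the previous nvmf_tgt (pid 68538 in this run), then relaunch it inside the
# test network namespace with DH-HMAC-CHAP debug logging enabled (-L nvmf_auth).
kill 68538 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

# Illustrative wait: poll the RPC socket until the new target answers. The suite
# uses waitforlisten from autotest_common.sh rather than this simplified loop.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done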
00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.803 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.061 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.061 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:59.061 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:11:59.061 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.061 21:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.319 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.886 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
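The successful sha512/ffdhe8192 attach traced above condenses to the RPC calls below. The NQNs, address, port, and key name are copied from the log; the two sockets distinguish the target application from the host-side bdev_nvme application.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (default /var/tmp/spdk.sock): authorize the host NQN with key3.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3

# Host side (/var/tmp/host.sock): attach with the matching DH-HMAC-CHAP key.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

# Verify the controller came up; the trace expects the name nvme0.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'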
00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.886 { 00:11:59.886 "cntlid": 1, 00:11:59.886 "qid": 0, 00:11:59.886 "state": "enabled", 00:11:59.886 "thread": "nvmf_tgt_poll_group_000", 00:11:59.886 "listen_address": { 00:11:59.886 "trtype": "TCP", 00:11:59.886 "adrfam": "IPv4", 00:11:59.886 "traddr": "10.0.0.2", 00:11:59.886 "trsvcid": "4420" 00:11:59.886 }, 00:11:59.886 "peer_address": { 00:11:59.886 "trtype": "TCP", 00:11:59.886 "adrfam": "IPv4", 00:11:59.886 "traddr": "10.0.0.1", 00:11:59.886 "trsvcid": "39630" 00:11:59.886 }, 00:11:59.886 "auth": { 00:11:59.886 "state": "completed", 00:11:59.886 "digest": "sha512", 00:11:59.886 "dhgroup": "ffdhe8192" 00:11:59.886 } 00:11:59.886 } 00:11:59.886 ]' 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.886 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.144 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:00.144 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.144 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.144 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.144 21:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.403 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid 987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-secret DHHC-1:03:Y2I1NTUzYjEzODhhN2NlZGEyZjg4YjBlNzg2YjNmY2JlYjFmYzUyMmE2M2NjYTI4NWQzZjlhYTM5YWM4YWE2Y5RqSzs=: 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --dhchap-key key3 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:00.970 21:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:01.229 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.229 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.230 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.488 request: 00:12:01.488 { 00:12:01.488 "name": "nvme0", 00:12:01.488 "trtype": "tcp", 00:12:01.488 "traddr": "10.0.0.2", 00:12:01.488 "adrfam": "ipv4", 00:12:01.488 "trsvcid": "4420", 00:12:01.488 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:01.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158", 00:12:01.488 "prchk_reftag": false, 00:12:01.489 "prchk_guard": false, 00:12:01.489 "hdgst": false, 00:12:01.489 "ddgst": false, 00:12:01.489 "dhchap_key": "key3", 00:12:01.489 "method": "bdev_nvme_attach_controller", 00:12:01.489 "req_id": 1 00:12:01.489 } 00:12:01.489 Got JSON-RPC error response 00:12:01.489 response: 00:12:01.489 { 00:12:01.489 "code": -5, 00:12:01.489 "message": "Input/output error" 00:12:01.489 } 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
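The NOT wrapper around the attach above asserts that authentication now fails, since the host was just restricted to sha256 digests. In spirit it behaves like the sketch below; this is a simplified illustration of the negative-test idiom, not the real NOT/valid_exec_arg helpers from autotest_common.sh.

# After 'bdev_nvme_set_options --dhchap-digests sha256' on the host side, the
# attach is expected to fail, so a zero exit status is itself a test error.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158
if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "ERROR: attach unexpectedly succeeded despite the digest restriction" >&2
    exit 1
fi
echo "attach failed as expected (-5 Input/output error in the trace)"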
00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:01.489 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.748 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.748 request: 00:12:01.748 { 00:12:01.748 "name": "nvme0", 00:12:01.748 "trtype": "tcp", 00:12:01.748 "traddr": "10.0.0.2", 00:12:01.748 "adrfam": "ipv4", 00:12:01.748 "trsvcid": "4420", 00:12:01.748 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:01.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158", 00:12:01.748 "prchk_reftag": false, 00:12:01.748 "prchk_guard": false, 00:12:01.748 "hdgst": false, 00:12:01.748 "ddgst": false, 00:12:01.748 "dhchap_key": "key3", 00:12:01.748 "method": "bdev_nvme_attach_controller", 00:12:01.748 "req_id": 1 00:12:01.748 } 00:12:01.748 Got JSON-RPC error response 
00:12:01.748 response: 00:12:01.748 { 00:12:01.748 "code": -5, 00:12:01.748 "message": "Input/output error" 00:12:01.748 } 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:02.007 21:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:02.007 21:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:02.265 request: 00:12:02.265 { 00:12:02.265 "name": "nvme0", 00:12:02.265 "trtype": "tcp", 00:12:02.265 "traddr": "10.0.0.2", 00:12:02.266 "adrfam": "ipv4", 00:12:02.266 "trsvcid": "4420", 00:12:02.266 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:02.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158", 00:12:02.266 "prchk_reftag": false, 00:12:02.266 "prchk_guard": false, 00:12:02.266 "hdgst": false, 00:12:02.266 "ddgst": false, 00:12:02.266 "dhchap_key": "key0", 00:12:02.266 "dhchap_ctrlr_key": "key1", 00:12:02.266 "method": "bdev_nvme_attach_controller", 00:12:02.266 "req_id": 1 00:12:02.266 } 00:12:02.266 Got JSON-RPC error response 00:12:02.266 response: 00:12:02.266 { 00:12:02.266 "code": -5, 00:12:02.266 "message": "Input/output error" 00:12:02.266 } 00:12:02.266 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:02.266 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.266 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:02.266 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.266 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:02.266 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:02.540 00:12:02.540 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:02.540 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.540 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:02.837 21:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.837 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.837 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68570 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68570 ']' 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68570 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68570 00:12:03.102 killing process with pid 68570 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68570' 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68570 00:12:03.102 21:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68570 00:12:03.670 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:03.670 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:03.670 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:03.670 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.670 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.671 rmmod nvme_tcp 00:12:03.671 rmmod nvme_fabrics 00:12:03.671 rmmod nvme_keyring 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71311 ']' 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71311 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71311 ']' 00:12:03.671 
21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71311 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71311 00:12:03.671 killing process with pid 71311 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71311' 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71311 00:12:03.671 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71311 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.T5w /tmp/spdk.key-sha256.4RS /tmp/spdk.key-sha384.DjO /tmp/spdk.key-sha512.n2B /tmp/spdk.key-sha512.vt5 /tmp/spdk.key-sha384.LM7 /tmp/spdk.key-sha256.Vrs '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:03.930 00:12:03.930 real 2m22.237s 00:12:03.930 user 5m40.640s 00:12:03.930 sys 0m22.233s 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.930 ************************************ 00:12:03.930 END TEST nvmf_auth_target 00:12:03.930 ************************************ 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.930 ************************************ 00:12:03.930 START TEST nvmf_bdevio_no_huge 00:12:03.930 ************************************ 00:12:03.930 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:04.190 * Looking for test storage... 00:12:04.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.190 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:04.191 21:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:04.191 Cannot find device "nvmf_tgt_br" 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.191 Cannot find device "nvmf_tgt_br2" 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:04.191 Cannot find device "nvmf_tgt_br" 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:04.191 Cannot find device "nvmf_tgt_br2" 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:04.191 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.449 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:04.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:04.450 00:12:04.450 --- 10.0.0.2 ping statistics --- 00:12:04.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.450 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:04.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:04.450 00:12:04.450 --- 10.0.0.3 ping statistics --- 00:12:04.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.450 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:04.450 00:12:04.450 --- 10.0.0.1 ping statistics --- 00:12:04.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.450 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71620 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71620 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71620 ']' 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.450 21:33:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:04.450 [2024-07-24 21:33:49.384455] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:12:04.450 [2024-07-24 21:33:49.384767] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:04.708 [2024-07-24 21:33:49.535209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.708 [2024-07-24 21:33:49.636882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.708 [2024-07-24 21:33:49.636935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.709 [2024-07-24 21:33:49.636945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.709 [2024-07-24 21:33:49.636952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.709 [2024-07-24 21:33:49.636958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.709 [2024-07-24 21:33:49.637115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:04.709 [2024-07-24 21:33:49.637263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:04.709 [2024-07-24 21:33:49.637566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:04.709 [2024-07-24 21:33:49.637572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.709 [2024-07-24 21:33:49.641435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:05.274 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.274 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:12:05.274 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.274 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:05.274 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 [2024-07-24 21:33:50.318803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 Malloc0 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.533 21:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.533 [2024-07-24 21:33:50.362969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.533 { 00:12:05.533 "params": { 00:12:05.533 "name": "Nvme$subsystem", 00:12:05.533 "trtype": "$TEST_TRANSPORT", 00:12:05.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.533 "adrfam": "ipv4", 00:12:05.533 "trsvcid": "$NVMF_PORT", 00:12:05.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.533 "hdgst": ${hdgst:-false}, 00:12:05.533 "ddgst": ${ddgst:-false} 00:12:05.533 }, 00:12:05.533 "method": "bdev_nvme_attach_controller" 00:12:05.533 } 00:12:05.533 EOF 00:12:05.533 )") 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
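Before bdevio runs, the trace sets up a minimal TCP target. That bring-up reduces to the RPC sequence below, with sizes, NQN, and listener address taken from the log; rpc_cmd in the suite is essentially a wrapper around rpc.py talking to the target's /var/tmp/spdk.sock.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport (flags copied from the trace)
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420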
00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:05.533 21:33:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.533 "params": { 00:12:05.533 "name": "Nvme1", 00:12:05.533 "trtype": "tcp", 00:12:05.533 "traddr": "10.0.0.2", 00:12:05.533 "adrfam": "ipv4", 00:12:05.533 "trsvcid": "4420", 00:12:05.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.533 "hdgst": false, 00:12:05.533 "ddgst": false 00:12:05.533 }, 00:12:05.533 "method": "bdev_nvme_attach_controller" 00:12:05.533 }' 00:12:05.533 [2024-07-24 21:33:50.420544] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:12:05.533 [2024-07-24 21:33:50.420655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71657 ] 00:12:05.791 [2024-07-24 21:33:50.567176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:05.791 [2024-07-24 21:33:50.710930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.791 [2024-07-24 21:33:50.711087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.791 [2024-07-24 21:33:50.711100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.792 [2024-07-24 21:33:50.725077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:06.050 I/O targets: 00:12:06.050 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:06.050 00:12:06.050 00:12:06.050 CUnit - A unit testing framework for C - Version 2.1-3 00:12:06.050 http://cunit.sourceforge.net/ 00:12:06.050 00:12:06.050 00:12:06.050 Suite: bdevio tests on: Nvme1n1 00:12:06.050 Test: blockdev write read block ...passed 00:12:06.050 Test: blockdev write zeroes read block ...passed 00:12:06.050 Test: blockdev write zeroes read no split ...passed 00:12:06.050 Test: blockdev write zeroes read split ...passed 00:12:06.050 Test: blockdev write zeroes read split partial ...passed 00:12:06.050 Test: blockdev reset ...[2024-07-24 21:33:50.921326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:06.050 [2024-07-24 21:33:50.921436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1a870 (9): Bad file descriptor 00:12:06.050 [2024-07-24 21:33:50.932578] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
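The per-controller JSON object printed above is the core of what gen_nvmf_target_json emits; bdevio reads it from file descriptor 62, which in the script is assumed to come from process substitution. A minimal sketch of that plumbing (paths relative to the spdk repo):
    # --json /dev/fd/62 in the trace corresponds to something like this; the generated
    # config tells bdevio to attach Nvme1 over NVMe/TCP at 10.0.0.2:4420 before the tests run.
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024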
00:12:06.050 passed 00:12:06.050 Test: blockdev write read 8 blocks ...passed 00:12:06.050 Test: blockdev write read size > 128k ...passed 00:12:06.050 Test: blockdev write read invalid size ...passed 00:12:06.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:06.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:06.050 Test: blockdev write read max offset ...passed 00:12:06.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:06.050 Test: blockdev writev readv 8 blocks ...passed 00:12:06.050 Test: blockdev writev readv 30 x 1block ...passed 00:12:06.050 Test: blockdev writev readv block ...passed 00:12:06.050 Test: blockdev writev readv size > 128k ...passed 00:12:06.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:06.050 Test: blockdev comparev and writev ...[2024-07-24 21:33:50.940347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.940401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.940431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.940442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.940779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.940809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.940826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.940836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.941403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.941445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.941473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.941483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.941898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.941926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.941944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.050 [2024-07-24 21:33:50.941954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:06.050 passed 00:12:06.050 Test: blockdev nvme passthru rw ...passed 00:12:06.050 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:33:50.943089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.050 [2024-07-24 21:33:50.943115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.943244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.050 [2024-07-24 21:33:50.943260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.943375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.050 [2024-07-24 21:33:50.943390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:06.050 [2024-07-24 21:33:50.943496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.050 [2024-07-24 21:33:50.943511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:06.050 passed 00:12:06.050 Test: blockdev nvme admin passthru ...passed 00:12:06.050 Test: blockdev copy ...passed 00:12:06.050 00:12:06.050 Run Summary: Type Total Ran Passed Failed Inactive 00:12:06.050 suites 1 1 n/a 0 0 00:12:06.050 tests 23 23 23 0 0 00:12:06.050 asserts 152 152 152 0 n/a 00:12:06.050 00:12:06.050 Elapsed time = 0.156 seconds 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.309 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.567 rmmod nvme_tcp 00:12:06.567 rmmod nvme_fabrics 00:12:06.567 rmmod nvme_keyring 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71620 ']' 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71620 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71620 ']' 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71620 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71620 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71620' 00:12:06.567 killing process with pid 71620 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71620 00:12:06.567 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71620 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.824 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:07.082 00:12:07.082 real 0m2.970s 00:12:07.082 user 0m9.734s 00:12:07.082 sys 0m1.170s 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.082 ************************************ 00:12:07.082 END TEST nvmf_bdevio_no_huge 00:12:07.082 ************************************ 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
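Condensed, the shutdown path traced above is the usual killprocess/nvmftestfini pattern (a sketch; the _remove_spdk_ns body is not shown in this log, so the netns deletion line is an assumption about its effect):
    kill -0 "$nvmfpid"                    # 71620 in this run: is the target still alive?
    # killprocess checks `ps --no-headers -o comm= $nvmfpid` (reactor_3 here) and only sudo-kills if needed
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the target and reap it
    modprobe -v -r nvme-tcp               # unload the kernel initiator modules
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk      # assumed effect of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if         # drop the host-side test address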
00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.082 ************************************ 00:12:07.082 START TEST nvmf_tls 00:12:07.082 ************************************ 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:07.082 * Looking for test storage... 00:12:07.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.082 21:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:07.082 Cannot find device 
"nvmf_tgt_br" 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.082 Cannot find device "nvmf_tgt_br2" 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:07.082 Cannot find device "nvmf_tgt_br" 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:07.082 Cannot find device "nvmf_tgt_br2" 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:07.082 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:07.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:07.341 00:12:07.341 --- 10.0.0.2 ping statistics --- 00:12:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.341 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:07.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:07.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:07.341 00:12:07.341 --- 10.0.0.3 ping statistics --- 00:12:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.341 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:07.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:07.341 00:12:07.341 --- 10.0.0.1 ping statistics --- 00:12:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.341 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.341 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=71836 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 71836 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71836 ']' 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.342 21:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:07.600 [2024-07-24 21:33:52.384926] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
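Condensed, the nvmf_veth_init sequence traced above builds the test topology and then launches the target inside the namespace: the host keeps 10.0.0.1 on nvmf_init_if, the nvmf_tgt_ns_spdk namespace gets 10.0.0.2 and 10.0.0.3, and everything is bridged over nvmf_br (a sketch assembled from the commands in the log; the individual 'ip link set ... up' calls are elided):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br     # bridge the host ends of the veth pairs
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc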
00:12:07.600 [2024-07-24 21:33:52.385014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.600 [2024-07-24 21:33:52.530781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.858 [2024-07-24 21:33:52.664851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.858 [2024-07-24 21:33:52.664944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.858 [2024-07-24 21:33:52.664961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.858 [2024-07-24 21:33:52.664972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.858 [2024-07-24 21:33:52.664982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.858 [2024-07-24 21:33:52.665021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:08.424 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:08.681 true 00:12:08.681 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:08.681 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:08.938 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:08.938 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:08.938 21:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:09.195 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:09.195 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:09.453 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:09.453 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:09.453 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:09.710 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:09.710 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:09.968 21:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:10.226 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:10.226 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:10.484 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:10.484 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:10.484 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:10.742 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:10.742 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:11.000 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.9IBgsrBZ5G 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5XfT8Zl2vi 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.9IBgsrBZ5G 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5XfT8Zl2vi 00:12:11.001 21:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:11.259 21:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:11.518 [2024-07-24 21:33:56.494143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:11.776 21:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.9IBgsrBZ5G 00:12:11.777 21:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9IBgsrBZ5G 00:12:11.777 21:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:12.035 [2024-07-24 21:33:56.798016] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.035 21:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:12.035 21:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:12.293 [2024-07-24 21:33:57.222047] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:12.294 [2024-07-24 21:33:57.222251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.294 21:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:12.552 malloc0 00:12:12.552 21:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:12.811 21:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9IBgsrBZ5G 00:12:13.070 [2024-07-24 21:33:57.923803] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:13.070 21:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9IBgsrBZ5G 00:12:25.276 Initializing NVMe Controllers 00:12:25.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:25.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:25.277 Initialization complete. Launching workers. 00:12:25.277 ======================================================== 00:12:25.277 Latency(us) 00:12:25.277 Device Information : IOPS MiB/s Average min max 00:12:25.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11276.29 44.05 5676.83 952.90 7449.57 00:12:25.277 ======================================================== 00:12:25.277 Total : 11276.29 44.05 5676.83 952.90 7449.57 00:12:25.277 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IBgsrBZ5G 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9IBgsrBZ5G' 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72061 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72061 /var/tmp/bdevperf.sock 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72061 ']' 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:25.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
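Condensed, the tls.sh setup traced above: the two interchange-format keys produced by format_interchange_psk are written to temp files with owner-only permissions, the SSL socket implementation is pinned to TLS 1.3, and the first key is registered as the PSK for host1 (a sketch; $key_path and $key_2_path stand for the mktemp results, /tmp/tmp.9IBgsrBZ5G and /tmp/tmp.5XfT8Zl2vi in this run):
    scripts/rpc.py sock_set_default_impl -i ssl
    scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    echo -n 'NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:' > "$key_2_path"
    chmod 0600 "$key_path" "$key_2_path"
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"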
00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.277 21:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:25.277 [2024-07-24 21:34:08.176538] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:12:25.277 [2024-07-24 21:34:08.176897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72061 ] 00:12:25.277 [2024-07-24 21:34:08.318451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.277 [2024-07-24 21:34:08.426931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.277 [2024-07-24 21:34:08.497406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:25.277 21:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.277 21:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:25.277 21:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9IBgsrBZ5G 00:12:25.277 [2024-07-24 21:34:09.265408] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:25.277 [2024-07-24 21:34:09.267014] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:25.277 TLSTESTn1 00:12:25.277 21:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:25.277 Running I/O for 10 seconds... 
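The data-path checks traced above, condensed into a sketch (paths relative to the spdk repo; $key_path is the registered key file): spdk_nvme_perf drives the listener directly over an SSL socket, then bdevperf attaches a TLS-enabled controller through its own RPC socket before bdevperf.py kicks off the verify workload.
    ip netns exec nvmf_tgt_ns_spdk build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$key_path"
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: wait for RPC
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests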
00:12:35.253 00:12:35.253 Latency(us) 00:12:35.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:35.253 Verification LBA range: start 0x0 length 0x2000 00:12:35.253 TLSTESTn1 : 10.01 4715.49 18.42 0.00 0.00 27096.72 5868.45 31457.28 00:12:35.253 =================================================================================================================== 00:12:35.253 Total : 4715.49 18.42 0.00 0.00 27096.72 5868.45 31457.28 00:12:35.253 0 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72061 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72061 ']' 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72061 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72061 00:12:35.253 killing process with pid 72061 00:12:35.253 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.253 00:12:35.253 Latency(us) 00:12:35.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.253 =================================================================================================================== 00:12:35.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72061' 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72061 00:12:35.253 [2024-07-24 21:34:19.496085] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72061 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XfT8Zl2vi 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XfT8Zl2vi 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.253 21:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XfT8Zl2vi 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5XfT8Zl2vi' 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72200 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72200 /var/tmp/bdevperf.sock 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72200 ']' 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:35.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.253 21:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:35.253 [2024-07-24 21:34:19.748105] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:12:35.253 [2024-07-24 21:34:19.748365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72200 ] 00:12:35.253 [2024-07-24 21:34:19.871822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.253 [2024-07-24 21:34:19.944436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.253 [2024-07-24 21:34:19.994796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:35.253 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.253 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:35.253 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5XfT8Zl2vi 00:12:35.253 [2024-07-24 21:34:20.247855] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:35.253 [2024-07-24 21:34:20.248179] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:35.512 [2024-07-24 21:34:20.254358] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:35.512 [2024-07-24 21:34:20.254737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a201f0 (107): Transport endpoint is not connected 00:12:35.512 [2024-07-24 21:34:20.255725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a201f0 (9): Bad file descriptor 00:12:35.512 [2024-07-24 21:34:20.256734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:35.512 [2024-07-24 21:34:20.256764] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:35.512 [2024-07-24 21:34:20.256778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
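The attach failure above is the point of this case: tls.sh@146 wraps run_bdevperf in NOT, handing host1 the second key even though only the first key was registered for it, so the controller attach is expected to fail and the JSON-RPC error response that follows is the passing outcome. A sketch of the pattern (NOT is assumed to be the autotest_common.sh helper that succeeds only when the wrapped command fails):
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XfT8Zl2vi   # wrong PSK => attach must fail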
00:12:35.512 request: 00:12:35.512 { 00:12:35.513 "name": "TLSTEST", 00:12:35.513 "trtype": "tcp", 00:12:35.513 "traddr": "10.0.0.2", 00:12:35.513 "adrfam": "ipv4", 00:12:35.513 "trsvcid": "4420", 00:12:35.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:35.513 "prchk_reftag": false, 00:12:35.513 "prchk_guard": false, 00:12:35.513 "hdgst": false, 00:12:35.513 "ddgst": false, 00:12:35.513 "psk": "/tmp/tmp.5XfT8Zl2vi", 00:12:35.513 "method": "bdev_nvme_attach_controller", 00:12:35.513 "req_id": 1 00:12:35.513 } 00:12:35.513 Got JSON-RPC error response 00:12:35.513 response: 00:12:35.513 { 00:12:35.513 "code": -5, 00:12:35.513 "message": "Input/output error" 00:12:35.513 } 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72200 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72200 ']' 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72200 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72200 00:12:35.513 killing process with pid 72200 00:12:35.513 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.513 00:12:35.513 Latency(us) 00:12:35.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.513 =================================================================================================================== 00:12:35.513 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72200' 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72200 00:12:35.513 [2024-07-24 21:34:20.306104] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72200 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9IBgsrBZ5G 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9IBgsrBZ5G 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9IBgsrBZ5G 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9IBgsrBZ5G' 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72209 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72209 /var/tmp/bdevperf.sock 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72209 ']' 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:35.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.513 21:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:35.772 [2024-07-24 21:34:20.540569] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:12:35.772 [2024-07-24 21:34:20.540837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72209 ] 00:12:35.772 [2024-07-24 21:34:20.674630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.772 [2024-07-24 21:34:20.770597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.031 [2024-07-24 21:34:20.821638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:36.597 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.597 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:36.598 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.9IBgsrBZ5G 00:12:36.857 [2024-07-24 21:34:21.667120] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:36.857 [2024-07-24 21:34:21.667395] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:36.857 [2024-07-24 21:34:21.674352] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:36.857 [2024-07-24 21:34:21.674614] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:36.857 [2024-07-24 21:34:21.674905] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:36.857 [2024-07-24 21:34:21.675758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21011f0 (107): Transport endpoint is not connected 00:12:36.857 [2024-07-24 21:34:21.676749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21011f0 (9): Bad file descriptor 00:12:36.857 [2024-07-24 21:34:21.677746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:36.857 [2024-07-24 21:34:21.677898] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:36.857 [2024-07-24 21:34:21.677919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
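The detail worth noting in this second negative case is the identity string in the target-side errors above, "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1": the PSK identity offered during the TLS handshake appears to combine a fixed prefix with the host NQN and the subsystem NQN, and the target resolves it against the keys registered with nvmf_subsystem_add_host. Only host1 was registered for cnode1 earlier in the run, so an attach as host2 (and, in the next case, an attach by host1 to cnode2, for which no key was registered) finds no PSK and fails the same way. A registration that would make this pair resolvable would look like the call below; it is hypothetical here, since the suite deliberately leaves it out so the attach is expected to fail:

    # Hypothetical registration of host2 for cnode1 (not performed in this run)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.9IBgsrBZ5G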
00:12:36.857 request: 00:12:36.857 { 00:12:36.857 "name": "TLSTEST", 00:12:36.857 "trtype": "tcp", 00:12:36.857 "traddr": "10.0.0.2", 00:12:36.857 "adrfam": "ipv4", 00:12:36.857 "trsvcid": "4420", 00:12:36.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.857 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:36.857 "prchk_reftag": false, 00:12:36.857 "prchk_guard": false, 00:12:36.857 "hdgst": false, 00:12:36.857 "ddgst": false, 00:12:36.857 "psk": "/tmp/tmp.9IBgsrBZ5G", 00:12:36.857 "method": "bdev_nvme_attach_controller", 00:12:36.857 "req_id": 1 00:12:36.857 } 00:12:36.857 Got JSON-RPC error response 00:12:36.857 response: 00:12:36.857 { 00:12:36.857 "code": -5, 00:12:36.857 "message": "Input/output error" 00:12:36.857 } 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72209 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72209 ']' 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72209 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72209 00:12:36.857 killing process with pid 72209 00:12:36.857 Received shutdown signal, test time was about 10.000000 seconds 00:12:36.857 00:12:36.857 Latency(us) 00:12:36.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.857 =================================================================================================================== 00:12:36.857 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72209' 00:12:36.857 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72209 00:12:36.857 [2024-07-24 21:34:21.715280] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:36.858 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72209 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IBgsrBZ5G 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IBgsrBZ5G 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9IBgsrBZ5G 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9IBgsrBZ5G' 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72242 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72242 /var/tmp/bdevperf.sock 00:12:37.117 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72242 ']' 00:12:37.118 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:37.118 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.118 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:37.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:37.118 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.118 21:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:37.118 [2024-07-24 21:34:21.970432] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:12:37.118 [2024-07-24 21:34:21.970892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72242 ] 00:12:37.118 [2024-07-24 21:34:22.110238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.377 [2024-07-24 21:34:22.193847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.377 [2024-07-24 21:34:22.244025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:37.945 21:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.945 21:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:37.945 21:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9IBgsrBZ5G 00:12:38.218 [2024-07-24 21:34:23.093226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:38.218 [2024-07-24 21:34:23.093498] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:38.218 [2024-07-24 21:34:23.104410] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:38.218 [2024-07-24 21:34:23.104445] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:38.218 [2024-07-24 21:34:23.104488] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:38.218 [2024-07-24 21:34:23.105018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8f1f0 (107): Transport endpoint is not connected 00:12:38.218 [2024-07-24 21:34:23.106006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8f1f0 (9): Bad file descriptor 00:12:38.218 [2024-07-24 21:34:23.106989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:38.218 [2024-07-24 21:34:23.107012] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:38.218 [2024-07-24 21:34:23.107056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:12:38.218 request: 00:12:38.218 { 00:12:38.218 "name": "TLSTEST", 00:12:38.218 "trtype": "tcp", 00:12:38.218 "traddr": "10.0.0.2", 00:12:38.218 "adrfam": "ipv4", 00:12:38.218 "trsvcid": "4420", 00:12:38.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:38.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:38.218 "prchk_reftag": false, 00:12:38.218 "prchk_guard": false, 00:12:38.218 "hdgst": false, 00:12:38.218 "ddgst": false, 00:12:38.218 "psk": "/tmp/tmp.9IBgsrBZ5G", 00:12:38.218 "method": "bdev_nvme_attach_controller", 00:12:38.218 "req_id": 1 00:12:38.218 } 00:12:38.218 Got JSON-RPC error response 00:12:38.218 response: 00:12:38.218 { 00:12:38.218 "code": -5, 00:12:38.218 "message": "Input/output error" 00:12:38.218 } 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72242 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72242 ']' 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72242 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72242 00:12:38.218 killing process with pid 72242 00:12:38.218 Received shutdown signal, test time was about 10.000000 seconds 00:12:38.218 00:12:38.218 Latency(us) 00:12:38.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.218 =================================================================================================================== 00:12:38.218 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72242' 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72242 00:12:38.218 [2024-07-24 21:34:23.144810] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:38.218 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72242 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72265 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72265 /var/tmp/bdevperf.sock 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72265 ']' 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.508 21:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.508 [2024-07-24 21:34:23.392124] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:12:38.508 [2024-07-24 21:34:23.392396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72265 ] 00:12:38.783 [2024-07-24 21:34:23.525709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.783 [2024-07-24 21:34:23.601220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.783 [2024-07-24 21:34:23.651256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:39.351 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.351 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:39.351 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:39.610 [2024-07-24 21:34:24.424529] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:39.610 [2024-07-24 21:34:24.426657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142fc00 (9): Bad file descriptor 00:12:39.610 [2024-07-24 21:34:24.427653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:39.610 [2024-07-24 21:34:24.427684] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:39.610 [2024-07-24 21:34:24.427699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
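The case above is the third variation: the same attach with no --psk at all (target/tls.sh@155). Against the TLS-enabled listener used by this suite the plain connection is dropped during setup, so the request dump that follows carries no "psk" field but ends in the same -5, Input/output error. The only difference from the earlier sketch is the missing option:

    # Attach without a PSK; expected (NOT run_bdevperf ...) to fail against a TLS listener
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1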
00:12:39.610 request: 00:12:39.610 { 00:12:39.610 "name": "TLSTEST", 00:12:39.610 "trtype": "tcp", 00:12:39.610 "traddr": "10.0.0.2", 00:12:39.610 "adrfam": "ipv4", 00:12:39.610 "trsvcid": "4420", 00:12:39.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.610 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.610 "prchk_reftag": false, 00:12:39.610 "prchk_guard": false, 00:12:39.610 "hdgst": false, 00:12:39.610 "ddgst": false, 00:12:39.610 "method": "bdev_nvme_attach_controller", 00:12:39.610 "req_id": 1 00:12:39.610 } 00:12:39.610 Got JSON-RPC error response 00:12:39.610 response: 00:12:39.610 { 00:12:39.610 "code": -5, 00:12:39.610 "message": "Input/output error" 00:12:39.610 } 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72265 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72265 ']' 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72265 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72265 00:12:39.610 killing process with pid 72265 00:12:39.610 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.610 00:12:39.610 Latency(us) 00:12:39.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.610 =================================================================================================================== 00:12:39.610 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72265' 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72265 00:12:39.610 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72265 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 71836 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71836 ']' 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71836 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 71836 00:12:39.869 killing process with pid 71836 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71836' 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71836 00:12:39.869 [2024-07-24 21:34:24.666166] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:39.869 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71836 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.djUwK7BrNP 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.djUwK7BrNP 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.127 21:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72304 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72304 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72304 ']' 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:12:40.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.127 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.127 [2024-07-24 21:34:25.060463] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:12:40.127 [2024-07-24 21:34:25.060547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.385 [2024-07-24 21:34:25.190180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.385 [2024-07-24 21:34:25.271059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.385 [2024-07-24 21:34:25.271130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.385 [2024-07-24 21:34:25.271142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.385 [2024-07-24 21:34:25.271149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.385 [2024-07-24 21:34:25.271155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.385 [2024-07-24 21:34:25.271190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.385 [2024-07-24 21:34:25.340131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.317 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.djUwK7BrNP 00:12:41.318 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.djUwK7BrNP 00:12:41.318 21:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:41.318 [2024-07-24 21:34:26.168440] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.318 21:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:41.576 21:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 -k 00:12:41.834 [2024-07-24 21:34:26.604513] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:41.834 [2024-07-24 21:34:26.604735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.834 21:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:41.834 malloc0 00:12:41.834 21:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:42.092 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:12:42.351 [2024-07-24 21:34:27.230347] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.djUwK7BrNP 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.djUwK7BrNP' 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72353 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72353 /var/tmp/bdevperf.sock 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72353 ']' 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:42.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.351 21:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.351 [2024-07-24 21:34:27.284525] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
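At this point the negative attach paths are done: the shared target (pid 71836) has been killed, a long interchange-format key has been generated and written to a 0600 temp file (/tmp/tmp.djUwK7BrNP), a fresh target (pid 72304) has been started, and setup_nvmf_tgt has created the TCP transport, subsystem cnode1, a TLS-enabled listener on 10.0.0.2:4420 (-k), a malloc namespace, and a host registration pointing at that key file. The bdevperf instance whose startup continues below then attaches with the same key and drives the verify workload for ten seconds (about 4.7k IOPS at 4 KiB in this run). A condensed sketch of that sequence, with paths and NQNs taken from the trace; the python3 -c call is an assumed reconstruction of nvmf/common.sh's format_key helper (base64 of the configured secret with a little-endian CRC-32 appended), not a verbatim copy of it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Interchange-format PSK: "NVMeTLSkey-1:02:" + base64(secret || crc32(secret)) + ":"
    # (digest indicator "02" as in the trace; should match the NVMeTLSkey-1:02:... value above)
    key_long=$(python3 -c 'import base64, zlib; s=b"00112233445566778899aabbccddeeff0011223344556677"; print("NVMeTLSkey-1:02:" + base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode() + ":", end="")')
    key_path=$(mktemp)                 # /tmp/tmp.djUwK7BrNP in this run
    echo -n "$key_long" > "$key_path"
    chmod 0600 "$key_path"             # the 0666 variant of this file is rejected later in the log

    # Target-side TLS setup (setup_nvmf_tgt, target/tls.sh@49-58)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

    # Host side: attach over TLS, then run the timed workload (TLSTESTn1 below)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests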
00:12:42.351 [2024-07-24 21:34:27.284596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72353 ] 00:12:42.609 [2024-07-24 21:34:27.410047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.609 [2024-07-24 21:34:27.511363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.609 [2024-07-24 21:34:27.563624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:43.177 21:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.177 21:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:43.177 21:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:12:43.436 [2024-07-24 21:34:28.409454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:43.436 [2024-07-24 21:34:28.409743] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:43.694 TLSTESTn1 00:12:43.694 21:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:43.694 Running I/O for 10 seconds... 00:12:53.670 00:12:53.670 Latency(us) 00:12:53.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.670 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:53.670 Verification LBA range: start 0x0 length 0x2000 00:12:53.670 TLSTESTn1 : 10.01 4715.99 18.42 0.00 0.00 27094.97 5540.77 23592.96 00:12:53.670 =================================================================================================================== 00:12:53.670 Total : 4715.99 18.42 0.00 0.00 27094.97 5540.77 23592.96 00:12:53.670 0 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72353 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72353 ']' 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72353 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72353 00:12:53.670 killing process with pid 72353 00:12:53.670 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.670 00:12:53.670 Latency(us) 00:12:53.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.670 =================================================================================================================== 00:12:53.670 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72353' 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72353 00:12:53.670 [2024-07-24 21:34:38.636170] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:53.670 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72353 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.djUwK7BrNP 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.djUwK7BrNP 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.djUwK7BrNP 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.djUwK7BrNP 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.djUwK7BrNP' 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72486 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72486 /var/tmp/bdevperf.sock 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72486 ']' 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:53.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.929 21:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:53.929 [2024-07-24 21:34:38.886035] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:12:53.929 [2024-07-24 21:34:38.886306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72486 ] 00:12:54.187 [2024-07-24 21:34:39.011500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.187 [2024-07-24 21:34:39.098076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.187 [2024-07-24 21:34:39.147803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:55.122 21:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.122 21:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:55.122 21:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:12:55.122 [2024-07-24 21:34:39.992808] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:55.122 [2024-07-24 21:34:39.993080] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:55.122 [2024-07-24 21:34:39.993096] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.djUwK7BrNP 00:12:55.122 request: 00:12:55.122 { 00:12:55.122 "name": "TLSTEST", 00:12:55.122 "trtype": "tcp", 00:12:55.122 "traddr": "10.0.0.2", 00:12:55.122 "adrfam": "ipv4", 00:12:55.122 "trsvcid": "4420", 00:12:55.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:55.122 "prchk_reftag": false, 00:12:55.122 "prchk_guard": false, 00:12:55.122 "hdgst": false, 00:12:55.122 "ddgst": false, 00:12:55.122 "psk": "/tmp/tmp.djUwK7BrNP", 00:12:55.122 "method": "bdev_nvme_attach_controller", 00:12:55.122 "req_id": 1 00:12:55.122 } 00:12:55.122 Got JSON-RPC error response 00:12:55.122 response: 00:12:55.122 { 00:12:55.122 "code": -1, 00:12:55.122 "message": "Operation not permitted" 00:12:55.122 } 00:12:55.122 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72486 00:12:55.122 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72486 ']' 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72486 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72486 00:12:55.123 killing process with pid 72486 00:12:55.123 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.123 00:12:55.123 Latency(us) 00:12:55.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.123 =================================================================================================================== 00:12:55.123 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72486' 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72486 00:12:55.123 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72486 00:12:55.381 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:55.381 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:55.381 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.381 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.381 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.381 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72304 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72304 ']' 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72304 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72304 00:12:55.382 killing process with pid 72304 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72304' 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72304 00:12:55.382 [2024-07-24 21:34:40.249532] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:55.382 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72304 00:12:55.640 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:12:55.640 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.641 21:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72520 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72520 00:12:55.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72520 ']' 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.641 21:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.641 [2024-07-24 21:34:40.582519] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:12:55.641 [2024-07-24 21:34:40.582826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.899 [2024-07-24 21:34:40.721317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.899 [2024-07-24 21:34:40.795751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.899 [2024-07-24 21:34:40.796125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.899 [2024-07-24 21:34:40.796237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.899 [2024-07-24 21:34:40.796250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.899 [2024-07-24 21:34:40.796256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.899 [2024-07-24 21:34:40.796297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.899 [2024-07-24 21:34:40.865127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.djUwK7BrNP 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.djUwK7BrNP 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.djUwK7BrNP 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.djUwK7BrNP 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:56.836 [2024-07-24 21:34:41.728729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.836 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:57.094 21:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:57.353 [2024-07-24 21:34:42.152794] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:57.353 [2024-07-24 21:34:42.152997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.353 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:57.612 malloc0 00:12:57.612 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:12:57.871 [2024-07-24 21:34:42.822368] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:57.871 [2024-07-24 21:34:42.822397] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:57.871 [2024-07-24 21:34:42.822427] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:57.871 request: 00:12:57.871 { 00:12:57.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.871 "host": "nqn.2016-06.io.spdk:host1", 00:12:57.871 "psk": "/tmp/tmp.djUwK7BrNP", 00:12:57.871 "method": "nvmf_subsystem_add_host", 00:12:57.871 "req_id": 1 00:12:57.871 } 00:12:57.871 Got JSON-RPC error response 00:12:57.871 response: 00:12:57.871 { 00:12:57.871 "code": -32603, 00:12:57.871 "message": "Internal error" 00:12:57.871 } 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72520 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72520 ']' 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72520 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72520 00:12:57.871 killing process with pid 72520 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72520' 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72520 00:12:57.871 21:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72520 00:12:58.130 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.djUwK7BrNP 00:12:58.130 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:58.130 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.130 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.130 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
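The Internal error above is the PSK file-permission check firing (tcp.c: 'Incorrect permissions for PSK file'); target/tls.sh@181 then applies chmod 0600 before restarting the target. A minimal sketch of preparing the key file so nvmf_subsystem_add_host is accepted; the actual PSK interchange string is never printed in this log, so it is elided below:
  KEY=/tmp/tmp.djUwK7BrNP
  printf '%s' 'NVMeTLSkey-1:01:<elided>:' > "$KEY"   # placeholder; the real key value is not in the log
  chmod 0600 "$KEY"                                  # required, otherwise add_host returns -32603
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"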
00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72583 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72583 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72583 ']' 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.389 21:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.389 [2024-07-24 21:34:43.176146] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:12:58.389 [2024-07-24 21:34:43.176397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.389 [2024-07-24 21:34:43.305232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.389 [2024-07-24 21:34:43.379988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.389 [2024-07-24 21:34:43.380358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.389 [2024-07-24 21:34:43.380493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.389 [2024-07-24 21:34:43.380506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.389 [2024-07-24 21:34:43.380512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.389 [2024-07-24 21:34:43.380546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.648 [2024-07-24 21:34:43.450001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.djUwK7BrNP 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.djUwK7BrNP 00:12:59.215 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:59.474 [2024-07-24 21:34:44.285629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.474 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:59.733 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:59.733 [2024-07-24 21:34:44.665667] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:59.733 [2024-07-24 21:34:44.665874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.733 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:59.991 malloc0 00:12:59.991 21:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.250 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:13:00.509 [2024-07-24 21:34:45.287239] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72632 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72632 /var/tmp/bdevperf.sock 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72632 ']' 
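The xtrace lines above replay target/tls.sh's setup_nvmf_tgt() against the fresh target (pid 72583); condensed into one sketch, with the paths, NQNs and key file exactly as used in this job:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.djUwK7BrNP
  $RPC nvmf_create_transport -t tcp -o                                  # TCP transport
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -k                                    # -k enables the (experimental) TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB backing bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk "$KEY"                           # deprecated PSK-path form (removal in v24.09)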
00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:00.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.509 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.509 [2024-07-24 21:34:45.342058] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:00.509 [2024-07-24 21:34:45.342149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72632 ] 00:13:00.509 [2024-07-24 21:34:45.469722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.768 [2024-07-24 21:34:45.557243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.768 [2024-07-24 21:34:45.609697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:00.768 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.768 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:00.768 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:13:01.026 [2024-07-24 21:34:45.844550] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:01.026 [2024-07-24 21:34:45.844887] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:01.026 TLSTESTn1 00:13:01.026 21:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:01.286 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:01.286 "subsystems": [ 00:13:01.286 { 00:13:01.286 "subsystem": "keyring", 00:13:01.286 "config": [] 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "subsystem": "iobuf", 00:13:01.286 "config": [ 00:13:01.286 { 00:13:01.286 "method": "iobuf_set_options", 00:13:01.286 "params": { 00:13:01.286 "small_pool_count": 8192, 00:13:01.286 "large_pool_count": 1024, 00:13:01.286 "small_bufsize": 8192, 00:13:01.286 "large_bufsize": 135168 00:13:01.286 } 00:13:01.286 } 00:13:01.286 ] 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "subsystem": "sock", 00:13:01.286 "config": [ 00:13:01.286 { 00:13:01.286 "method": "sock_set_default_impl", 00:13:01.286 "params": { 00:13:01.286 "impl_name": "uring" 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "sock_impl_set_options", 00:13:01.286 "params": { 00:13:01.286 "impl_name": "ssl", 00:13:01.286 "recv_buf_size": 4096, 00:13:01.286 
"send_buf_size": 4096, 00:13:01.286 "enable_recv_pipe": true, 00:13:01.286 "enable_quickack": false, 00:13:01.286 "enable_placement_id": 0, 00:13:01.286 "enable_zerocopy_send_server": true, 00:13:01.286 "enable_zerocopy_send_client": false, 00:13:01.286 "zerocopy_threshold": 0, 00:13:01.286 "tls_version": 0, 00:13:01.286 "enable_ktls": false 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "sock_impl_set_options", 00:13:01.286 "params": { 00:13:01.286 "impl_name": "posix", 00:13:01.286 "recv_buf_size": 2097152, 00:13:01.286 "send_buf_size": 2097152, 00:13:01.286 "enable_recv_pipe": true, 00:13:01.286 "enable_quickack": false, 00:13:01.286 "enable_placement_id": 0, 00:13:01.286 "enable_zerocopy_send_server": true, 00:13:01.286 "enable_zerocopy_send_client": false, 00:13:01.286 "zerocopy_threshold": 0, 00:13:01.286 "tls_version": 0, 00:13:01.286 "enable_ktls": false 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "sock_impl_set_options", 00:13:01.286 "params": { 00:13:01.286 "impl_name": "uring", 00:13:01.286 "recv_buf_size": 2097152, 00:13:01.286 "send_buf_size": 2097152, 00:13:01.286 "enable_recv_pipe": true, 00:13:01.286 "enable_quickack": false, 00:13:01.286 "enable_placement_id": 0, 00:13:01.286 "enable_zerocopy_send_server": false, 00:13:01.286 "enable_zerocopy_send_client": false, 00:13:01.286 "zerocopy_threshold": 0, 00:13:01.286 "tls_version": 0, 00:13:01.286 "enable_ktls": false 00:13:01.286 } 00:13:01.286 } 00:13:01.286 ] 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "subsystem": "vmd", 00:13:01.286 "config": [] 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "subsystem": "accel", 00:13:01.286 "config": [ 00:13:01.286 { 00:13:01.286 "method": "accel_set_options", 00:13:01.286 "params": { 00:13:01.286 "small_cache_size": 128, 00:13:01.286 "large_cache_size": 16, 00:13:01.286 "task_count": 2048, 00:13:01.286 "sequence_count": 2048, 00:13:01.286 "buf_count": 2048 00:13:01.286 } 00:13:01.286 } 00:13:01.286 ] 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "subsystem": "bdev", 00:13:01.286 "config": [ 00:13:01.286 { 00:13:01.286 "method": "bdev_set_options", 00:13:01.286 "params": { 00:13:01.286 "bdev_io_pool_size": 65535, 00:13:01.286 "bdev_io_cache_size": 256, 00:13:01.286 "bdev_auto_examine": true, 00:13:01.286 "iobuf_small_cache_size": 128, 00:13:01.286 "iobuf_large_cache_size": 16 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "bdev_raid_set_options", 00:13:01.286 "params": { 00:13:01.286 "process_window_size_kb": 1024, 00:13:01.286 "process_max_bandwidth_mb_sec": 0 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "bdev_iscsi_set_options", 00:13:01.286 "params": { 00:13:01.286 "timeout_sec": 30 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "bdev_nvme_set_options", 00:13:01.286 "params": { 00:13:01.286 "action_on_timeout": "none", 00:13:01.286 "timeout_us": 0, 00:13:01.286 "timeout_admin_us": 0, 00:13:01.286 "keep_alive_timeout_ms": 10000, 00:13:01.286 "arbitration_burst": 0, 00:13:01.286 "low_priority_weight": 0, 00:13:01.286 "medium_priority_weight": 0, 00:13:01.286 "high_priority_weight": 0, 00:13:01.286 "nvme_adminq_poll_period_us": 10000, 00:13:01.286 "nvme_ioq_poll_period_us": 0, 00:13:01.286 "io_queue_requests": 0, 00:13:01.286 "delay_cmd_submit": true, 00:13:01.286 "transport_retry_count": 4, 00:13:01.286 "bdev_retry_count": 3, 00:13:01.286 "transport_ack_timeout": 0, 00:13:01.286 "ctrlr_loss_timeout_sec": 0, 00:13:01.286 "reconnect_delay_sec": 0, 00:13:01.286 
"fast_io_fail_timeout_sec": 0, 00:13:01.286 "disable_auto_failback": false, 00:13:01.286 "generate_uuids": false, 00:13:01.286 "transport_tos": 0, 00:13:01.286 "nvme_error_stat": false, 00:13:01.286 "rdma_srq_size": 0, 00:13:01.286 "io_path_stat": false, 00:13:01.286 "allow_accel_sequence": false, 00:13:01.286 "rdma_max_cq_size": 0, 00:13:01.286 "rdma_cm_event_timeout_ms": 0, 00:13:01.286 "dhchap_digests": [ 00:13:01.286 "sha256", 00:13:01.286 "sha384", 00:13:01.286 "sha512" 00:13:01.286 ], 00:13:01.286 "dhchap_dhgroups": [ 00:13:01.286 "null", 00:13:01.286 "ffdhe2048", 00:13:01.286 "ffdhe3072", 00:13:01.286 "ffdhe4096", 00:13:01.286 "ffdhe6144", 00:13:01.286 "ffdhe8192" 00:13:01.286 ] 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "bdev_nvme_set_hotplug", 00:13:01.286 "params": { 00:13:01.286 "period_us": 100000, 00:13:01.286 "enable": false 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "bdev_malloc_create", 00:13:01.286 "params": { 00:13:01.286 "name": "malloc0", 00:13:01.286 "num_blocks": 8192, 00:13:01.286 "block_size": 4096, 00:13:01.286 "physical_block_size": 4096, 00:13:01.286 "uuid": "c50be8c8-dbf6-45c8-833f-32f778a46f48", 00:13:01.286 "optimal_io_boundary": 0, 00:13:01.286 "md_size": 0, 00:13:01.286 "dif_type": 0, 00:13:01.286 "dif_is_head_of_md": false, 00:13:01.286 "dif_pi_format": 0 00:13:01.286 } 00:13:01.286 }, 00:13:01.286 { 00:13:01.286 "method": "bdev_wait_for_examine" 00:13:01.286 } 00:13:01.286 ] 00:13:01.286 }, 00:13:01.287 { 00:13:01.287 "subsystem": "nbd", 00:13:01.287 "config": [] 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "subsystem": "scheduler", 00:13:01.287 "config": [ 00:13:01.287 { 00:13:01.287 "method": "framework_set_scheduler", 00:13:01.287 "params": { 00:13:01.287 "name": "static" 00:13:01.287 } 00:13:01.287 } 00:13:01.287 ] 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "subsystem": "nvmf", 00:13:01.287 "config": [ 00:13:01.287 { 00:13:01.287 "method": "nvmf_set_config", 00:13:01.287 "params": { 00:13:01.287 "discovery_filter": "match_any", 00:13:01.287 "admin_cmd_passthru": { 00:13:01.287 "identify_ctrlr": false 00:13:01.287 } 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_set_max_subsystems", 00:13:01.287 "params": { 00:13:01.287 "max_subsystems": 1024 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_set_crdt", 00:13:01.287 "params": { 00:13:01.287 "crdt1": 0, 00:13:01.287 "crdt2": 0, 00:13:01.287 "crdt3": 0 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_create_transport", 00:13:01.287 "params": { 00:13:01.287 "trtype": "TCP", 00:13:01.287 "max_queue_depth": 128, 00:13:01.287 "max_io_qpairs_per_ctrlr": 127, 00:13:01.287 "in_capsule_data_size": 4096, 00:13:01.287 "max_io_size": 131072, 00:13:01.287 "io_unit_size": 131072, 00:13:01.287 "max_aq_depth": 128, 00:13:01.287 "num_shared_buffers": 511, 00:13:01.287 "buf_cache_size": 4294967295, 00:13:01.287 "dif_insert_or_strip": false, 00:13:01.287 "zcopy": false, 00:13:01.287 "c2h_success": false, 00:13:01.287 "sock_priority": 0, 00:13:01.287 "abort_timeout_sec": 1, 00:13:01.287 "ack_timeout": 0, 00:13:01.287 "data_wr_pool_size": 0 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_create_subsystem", 00:13:01.287 "params": { 00:13:01.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.287 "allow_any_host": false, 00:13:01.287 "serial_number": "SPDK00000000000001", 00:13:01.287 "model_number": "SPDK bdev Controller", 00:13:01.287 "max_namespaces": 10, 00:13:01.287 
"min_cntlid": 1, 00:13:01.287 "max_cntlid": 65519, 00:13:01.287 "ana_reporting": false 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_subsystem_add_host", 00:13:01.287 "params": { 00:13:01.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.287 "host": "nqn.2016-06.io.spdk:host1", 00:13:01.287 "psk": "/tmp/tmp.djUwK7BrNP" 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_subsystem_add_ns", 00:13:01.287 "params": { 00:13:01.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.287 "namespace": { 00:13:01.287 "nsid": 1, 00:13:01.287 "bdev_name": "malloc0", 00:13:01.287 "nguid": "C50BE8C8DBF645C8833F32F778A46F48", 00:13:01.287 "uuid": "c50be8c8-dbf6-45c8-833f-32f778a46f48", 00:13:01.287 "no_auto_visible": false 00:13:01.287 } 00:13:01.287 } 00:13:01.287 }, 00:13:01.287 { 00:13:01.287 "method": "nvmf_subsystem_add_listener", 00:13:01.287 "params": { 00:13:01.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.287 "listen_address": { 00:13:01.287 "trtype": "TCP", 00:13:01.287 "adrfam": "IPv4", 00:13:01.287 "traddr": "10.0.0.2", 00:13:01.287 "trsvcid": "4420" 00:13:01.287 }, 00:13:01.287 "secure_channel": true 00:13:01.287 } 00:13:01.287 } 00:13:01.287 ] 00:13:01.287 } 00:13:01.287 ] 00:13:01.287 }' 00:13:01.287 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:01.546 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:01.546 "subsystems": [ 00:13:01.546 { 00:13:01.546 "subsystem": "keyring", 00:13:01.546 "config": [] 00:13:01.546 }, 00:13:01.546 { 00:13:01.546 "subsystem": "iobuf", 00:13:01.546 "config": [ 00:13:01.546 { 00:13:01.546 "method": "iobuf_set_options", 00:13:01.546 "params": { 00:13:01.546 "small_pool_count": 8192, 00:13:01.546 "large_pool_count": 1024, 00:13:01.546 "small_bufsize": 8192, 00:13:01.546 "large_bufsize": 135168 00:13:01.546 } 00:13:01.546 } 00:13:01.546 ] 00:13:01.546 }, 00:13:01.546 { 00:13:01.546 "subsystem": "sock", 00:13:01.546 "config": [ 00:13:01.546 { 00:13:01.546 "method": "sock_set_default_impl", 00:13:01.546 "params": { 00:13:01.546 "impl_name": "uring" 00:13:01.546 } 00:13:01.546 }, 00:13:01.546 { 00:13:01.546 "method": "sock_impl_set_options", 00:13:01.546 "params": { 00:13:01.546 "impl_name": "ssl", 00:13:01.546 "recv_buf_size": 4096, 00:13:01.546 "send_buf_size": 4096, 00:13:01.546 "enable_recv_pipe": true, 00:13:01.546 "enable_quickack": false, 00:13:01.547 "enable_placement_id": 0, 00:13:01.547 "enable_zerocopy_send_server": true, 00:13:01.547 "enable_zerocopy_send_client": false, 00:13:01.547 "zerocopy_threshold": 0, 00:13:01.547 "tls_version": 0, 00:13:01.547 "enable_ktls": false 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "sock_impl_set_options", 00:13:01.547 "params": { 00:13:01.547 "impl_name": "posix", 00:13:01.547 "recv_buf_size": 2097152, 00:13:01.547 "send_buf_size": 2097152, 00:13:01.547 "enable_recv_pipe": true, 00:13:01.547 "enable_quickack": false, 00:13:01.547 "enable_placement_id": 0, 00:13:01.547 "enable_zerocopy_send_server": true, 00:13:01.547 "enable_zerocopy_send_client": false, 00:13:01.547 "zerocopy_threshold": 0, 00:13:01.547 "tls_version": 0, 00:13:01.547 "enable_ktls": false 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "sock_impl_set_options", 00:13:01.547 "params": { 00:13:01.547 "impl_name": "uring", 00:13:01.547 "recv_buf_size": 2097152, 00:13:01.547 "send_buf_size": 2097152, 
00:13:01.547 "enable_recv_pipe": true, 00:13:01.547 "enable_quickack": false, 00:13:01.547 "enable_placement_id": 0, 00:13:01.547 "enable_zerocopy_send_server": false, 00:13:01.547 "enable_zerocopy_send_client": false, 00:13:01.547 "zerocopy_threshold": 0, 00:13:01.547 "tls_version": 0, 00:13:01.547 "enable_ktls": false 00:13:01.547 } 00:13:01.547 } 00:13:01.547 ] 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "subsystem": "vmd", 00:13:01.547 "config": [] 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "subsystem": "accel", 00:13:01.547 "config": [ 00:13:01.547 { 00:13:01.547 "method": "accel_set_options", 00:13:01.547 "params": { 00:13:01.547 "small_cache_size": 128, 00:13:01.547 "large_cache_size": 16, 00:13:01.547 "task_count": 2048, 00:13:01.547 "sequence_count": 2048, 00:13:01.547 "buf_count": 2048 00:13:01.547 } 00:13:01.547 } 00:13:01.547 ] 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "subsystem": "bdev", 00:13:01.547 "config": [ 00:13:01.547 { 00:13:01.547 "method": "bdev_set_options", 00:13:01.547 "params": { 00:13:01.547 "bdev_io_pool_size": 65535, 00:13:01.547 "bdev_io_cache_size": 256, 00:13:01.547 "bdev_auto_examine": true, 00:13:01.547 "iobuf_small_cache_size": 128, 00:13:01.547 "iobuf_large_cache_size": 16 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "bdev_raid_set_options", 00:13:01.547 "params": { 00:13:01.547 "process_window_size_kb": 1024, 00:13:01.547 "process_max_bandwidth_mb_sec": 0 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "bdev_iscsi_set_options", 00:13:01.547 "params": { 00:13:01.547 "timeout_sec": 30 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "bdev_nvme_set_options", 00:13:01.547 "params": { 00:13:01.547 "action_on_timeout": "none", 00:13:01.547 "timeout_us": 0, 00:13:01.547 "timeout_admin_us": 0, 00:13:01.547 "keep_alive_timeout_ms": 10000, 00:13:01.547 "arbitration_burst": 0, 00:13:01.547 "low_priority_weight": 0, 00:13:01.547 "medium_priority_weight": 0, 00:13:01.547 "high_priority_weight": 0, 00:13:01.547 "nvme_adminq_poll_period_us": 10000, 00:13:01.547 "nvme_ioq_poll_period_us": 0, 00:13:01.547 "io_queue_requests": 512, 00:13:01.547 "delay_cmd_submit": true, 00:13:01.547 "transport_retry_count": 4, 00:13:01.547 "bdev_retry_count": 3, 00:13:01.547 "transport_ack_timeout": 0, 00:13:01.547 "ctrlr_loss_timeout_sec": 0, 00:13:01.547 "reconnect_delay_sec": 0, 00:13:01.547 "fast_io_fail_timeout_sec": 0, 00:13:01.547 "disable_auto_failback": false, 00:13:01.547 "generate_uuids": false, 00:13:01.547 "transport_tos": 0, 00:13:01.547 "nvme_error_stat": false, 00:13:01.547 "rdma_srq_size": 0, 00:13:01.547 "io_path_stat": false, 00:13:01.547 "allow_accel_sequence": false, 00:13:01.547 "rdma_max_cq_size": 0, 00:13:01.547 "rdma_cm_event_timeout_ms": 0, 00:13:01.547 "dhchap_digests": [ 00:13:01.547 "sha256", 00:13:01.547 "sha384", 00:13:01.547 "sha512" 00:13:01.547 ], 00:13:01.547 "dhchap_dhgroups": [ 00:13:01.547 "null", 00:13:01.547 "ffdhe2048", 00:13:01.547 "ffdhe3072", 00:13:01.547 "ffdhe4096", 00:13:01.547 "ffdhe6144", 00:13:01.547 "ffdhe8192" 00:13:01.547 ] 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "bdev_nvme_attach_controller", 00:13:01.547 "params": { 00:13:01.547 "name": "TLSTEST", 00:13:01.547 "trtype": "TCP", 00:13:01.547 "adrfam": "IPv4", 00:13:01.547 "traddr": "10.0.0.2", 00:13:01.547 "trsvcid": "4420", 00:13:01.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.547 "prchk_reftag": false, 00:13:01.547 "prchk_guard": false, 00:13:01.547 "ctrlr_loss_timeout_sec": 0, 
00:13:01.547 "reconnect_delay_sec": 0, 00:13:01.547 "fast_io_fail_timeout_sec": 0, 00:13:01.547 "psk": "/tmp/tmp.djUwK7BrNP", 00:13:01.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:01.547 "hdgst": false, 00:13:01.547 "ddgst": false 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "bdev_nvme_set_hotplug", 00:13:01.547 "params": { 00:13:01.547 "period_us": 100000, 00:13:01.547 "enable": false 00:13:01.547 } 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "method": "bdev_wait_for_examine" 00:13:01.547 } 00:13:01.547 ] 00:13:01.547 }, 00:13:01.547 { 00:13:01.547 "subsystem": "nbd", 00:13:01.547 "config": [] 00:13:01.547 } 00:13:01.547 ] 00:13:01.547 }' 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 72632 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72632 ']' 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72632 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72632 00:13:01.547 killing process with pid 72632 00:13:01.547 Received shutdown signal, test time was about 10.000000 seconds 00:13:01.547 00:13:01.547 Latency(us) 00:13:01.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.547 =================================================================================================================== 00:13:01.547 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72632' 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72632 00:13:01.547 [2024-07-24 21:34:46.544418] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:01.547 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72632 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 72583 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72583 ']' 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72583 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72583 00:13:01.807 killing process with pid 72583 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:01.807 21:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72583' 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72583 00:13:01.807 [2024-07-24 21:34:46.772493] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:01.807 21:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72583 00:13:02.066 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:02.066 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.066 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.066 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:02.066 "subsystems": [ 00:13:02.066 { 00:13:02.066 "subsystem": "keyring", 00:13:02.066 "config": [] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "iobuf", 00:13:02.066 "config": [ 00:13:02.066 { 00:13:02.066 "method": "iobuf_set_options", 00:13:02.066 "params": { 00:13:02.066 "small_pool_count": 8192, 00:13:02.066 "large_pool_count": 1024, 00:13:02.066 "small_bufsize": 8192, 00:13:02.066 "large_bufsize": 135168 00:13:02.066 } 00:13:02.066 } 00:13:02.066 ] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "sock", 00:13:02.066 "config": [ 00:13:02.066 { 00:13:02.066 "method": "sock_set_default_impl", 00:13:02.066 "params": { 00:13:02.066 "impl_name": "uring" 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "sock_impl_set_options", 00:13:02.066 "params": { 00:13:02.066 "impl_name": "ssl", 00:13:02.066 "recv_buf_size": 4096, 00:13:02.066 "send_buf_size": 4096, 00:13:02.066 "enable_recv_pipe": true, 00:13:02.066 "enable_quickack": false, 00:13:02.066 "enable_placement_id": 0, 00:13:02.066 "enable_zerocopy_send_server": true, 00:13:02.066 "enable_zerocopy_send_client": false, 00:13:02.066 "zerocopy_threshold": 0, 00:13:02.066 "tls_version": 0, 00:13:02.066 "enable_ktls": false 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "sock_impl_set_options", 00:13:02.066 "params": { 00:13:02.066 "impl_name": "posix", 00:13:02.066 "recv_buf_size": 2097152, 00:13:02.066 "send_buf_size": 2097152, 00:13:02.066 "enable_recv_pipe": true, 00:13:02.066 "enable_quickack": false, 00:13:02.066 "enable_placement_id": 0, 00:13:02.066 "enable_zerocopy_send_server": true, 00:13:02.066 "enable_zerocopy_send_client": false, 00:13:02.066 "zerocopy_threshold": 0, 00:13:02.066 "tls_version": 0, 00:13:02.066 "enable_ktls": false 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "sock_impl_set_options", 00:13:02.066 "params": { 00:13:02.066 "impl_name": "uring", 00:13:02.066 "recv_buf_size": 2097152, 00:13:02.066 "send_buf_size": 2097152, 00:13:02.066 "enable_recv_pipe": true, 00:13:02.066 "enable_quickack": false, 00:13:02.066 "enable_placement_id": 0, 00:13:02.066 "enable_zerocopy_send_server": false, 00:13:02.066 "enable_zerocopy_send_client": false, 00:13:02.066 "zerocopy_threshold": 0, 00:13:02.066 "tls_version": 0, 00:13:02.066 "enable_ktls": false 00:13:02.066 } 00:13:02.066 } 00:13:02.066 ] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "vmd", 00:13:02.066 "config": [] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "accel", 00:13:02.066 "config": [ 00:13:02.066 { 00:13:02.066 "method": 
"accel_set_options", 00:13:02.066 "params": { 00:13:02.066 "small_cache_size": 128, 00:13:02.066 "large_cache_size": 16, 00:13:02.066 "task_count": 2048, 00:13:02.066 "sequence_count": 2048, 00:13:02.066 "buf_count": 2048 00:13:02.066 } 00:13:02.066 } 00:13:02.066 ] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "bdev", 00:13:02.066 "config": [ 00:13:02.066 { 00:13:02.066 "method": "bdev_set_options", 00:13:02.066 "params": { 00:13:02.066 "bdev_io_pool_size": 65535, 00:13:02.066 "bdev_io_cache_size": 256, 00:13:02.066 "bdev_auto_examine": true, 00:13:02.066 "iobuf_small_cache_size": 128, 00:13:02.066 "iobuf_large_cache_size": 16 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "bdev_raid_set_options", 00:13:02.066 "params": { 00:13:02.066 "process_window_size_kb": 1024, 00:13:02.066 "process_max_bandwidth_mb_sec": 0 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "bdev_iscsi_set_options", 00:13:02.066 "params": { 00:13:02.066 "timeout_sec": 30 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "bdev_nvme_set_options", 00:13:02.066 "params": { 00:13:02.066 "action_on_timeout": "none", 00:13:02.066 "timeout_us": 0, 00:13:02.066 "timeout_admin_us": 0, 00:13:02.066 "keep_alive_timeout_ms": 10000, 00:13:02.066 "arbitration_burst": 0, 00:13:02.066 "low_priority_weight": 0, 00:13:02.066 "medium_priority_weight": 0, 00:13:02.066 "high_priority_weight": 0, 00:13:02.066 "nvme_adminq_poll_period_us": 10000, 00:13:02.066 "nvme_ioq_poll_period_us": 0, 00:13:02.066 "io_queue_requests": 0, 00:13:02.066 "delay_cmd_submit": true, 00:13:02.066 "transport_retry_count": 4, 00:13:02.066 "bdev_retry_count": 3, 00:13:02.066 "transport_ack_timeout": 0, 00:13:02.066 "ctrlr_loss_timeout_sec": 0, 00:13:02.066 "reconnect_delay_sec": 0, 00:13:02.066 "fast_io_fail_timeout_sec": 0, 00:13:02.066 "disable_auto_failback": false, 00:13:02.066 "generate_uuids": false, 00:13:02.066 "transport_tos": 0, 00:13:02.066 "nvme_error_stat": false, 00:13:02.066 "rdma_srq_size": 0, 00:13:02.066 "io_path_stat": false, 00:13:02.066 "allow_accel_sequence": false, 00:13:02.066 "rdma_max_cq_size": 0, 00:13:02.066 "rdma_cm_event_timeout_ms": 0, 00:13:02.066 "dhchap_digests": [ 00:13:02.066 "sha256", 00:13:02.066 "sha384", 00:13:02.066 "sha512" 00:13:02.066 ], 00:13:02.066 "dhchap_dhgroups": [ 00:13:02.066 "null", 00:13:02.066 "ffdhe2048", 00:13:02.066 "ffdhe3072", 00:13:02.066 "ffdhe4096", 00:13:02.066 "ffdhe6144", 00:13:02.066 "ffdhe8192" 00:13:02.066 ] 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "bdev_nvme_set_hotplug", 00:13:02.066 "params": { 00:13:02.066 "period_us": 100000, 00:13:02.066 "enable": false 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "bdev_malloc_create", 00:13:02.066 "params": { 00:13:02.066 "name": "malloc0", 00:13:02.066 "num_blocks": 8192, 00:13:02.066 "block_size": 4096, 00:13:02.066 "physical_block_size": 4096, 00:13:02.066 "uuid": "c50be8c8-dbf6-45c8-833f-32f778a46f48", 00:13:02.066 "optimal_io_boundary": 0, 00:13:02.066 "md_size": 0, 00:13:02.066 "dif_type": 0, 00:13:02.066 "dif_is_head_of_md": false, 00:13:02.066 "dif_pi_format": 0 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "bdev_wait_for_examine" 00:13:02.066 } 00:13:02.066 ] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "nbd", 00:13:02.066 "config": [] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "scheduler", 00:13:02.066 "config": [ 00:13:02.066 { 00:13:02.066 "method": "framework_set_scheduler", 
00:13:02.066 "params": { 00:13:02.066 "name": "static" 00:13:02.066 } 00:13:02.066 } 00:13:02.066 ] 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "subsystem": "nvmf", 00:13:02.066 "config": [ 00:13:02.066 { 00:13:02.066 "method": "nvmf_set_config", 00:13:02.066 "params": { 00:13:02.066 "discovery_filter": "match_any", 00:13:02.066 "admin_cmd_passthru": { 00:13:02.066 "identify_ctrlr": false 00:13:02.066 } 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "nvmf_set_max_subsystems", 00:13:02.066 "params": { 00:13:02.066 "max_subsystems": 1024 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "nvmf_set_crdt", 00:13:02.066 "params": { 00:13:02.066 "crdt1": 0, 00:13:02.066 "crdt2": 0, 00:13:02.066 "crdt3": 0 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "nvmf_create_transport", 00:13:02.066 "params": { 00:13:02.066 "trtype": "TCP", 00:13:02.066 "max_queue_depth": 128, 00:13:02.066 "max_io_qpairs_per_ctrlr": 127, 00:13:02.066 "in_capsule_data_size": 4096, 00:13:02.066 "max_io_size": 131072, 00:13:02.066 "io_unit_size": 131072, 00:13:02.066 "max_aq_depth": 128, 00:13:02.066 "num_shared_buffers": 511, 00:13:02.066 "buf_cache_size": 4294967295, 00:13:02.066 "dif_insert_or_strip": false, 00:13:02.066 "zcopy": false, 00:13:02.066 "c2h_success": false, 00:13:02.066 "sock_priority": 0, 00:13:02.066 "abort_timeout_sec": 1, 00:13:02.066 "ack_timeout": 0, 00:13:02.066 "data_wr_pool_size": 0 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "nvmf_create_subsystem", 00:13:02.066 "params": { 00:13:02.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.066 "allow_any_host": false, 00:13:02.066 "serial_number": "SPDK00000000000001", 00:13:02.066 "model_number": "SPDK bdev Controller", 00:13:02.066 "max_namespaces": 10, 00:13:02.066 "min_cntlid": 1, 00:13:02.066 "max_cntlid": 65519, 00:13:02.066 "ana_reporting": false 00:13:02.066 } 00:13:02.066 }, 00:13:02.066 { 00:13:02.066 "method": "nvmf_subsystem_add_host", 00:13:02.066 "params": { 00:13:02.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.067 "host": "nqn.2016-06.io.spdk:host1", 00:13:02.067 "psk": "/tmp/tmp.djUwK7BrNP" 00:13:02.067 } 00:13:02.067 }, 00:13:02.067 { 00:13:02.067 "method": "nvmf_subsystem_add_ns", 00:13:02.067 "params": { 00:13:02.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.067 "namespace": { 00:13:02.067 "nsid": 1, 00:13:02.067 "bdev_name": "malloc0", 00:13:02.067 "nguid": "C50BE8C8DBF645C8833F32F778A46F48", 00:13:02.067 "uuid": "c50be8c8-dbf6-45c8-833f-32f778a46f48", 00:13:02.067 "no_auto_visible": false 00:13:02.067 } 00:13:02.067 } 00:13:02.067 }, 00:13:02.067 { 00:13:02.067 "method": "nvmf_subsystem_add_listener", 00:13:02.067 "params": { 00:13:02.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.067 "listen_address": { 00:13:02.067 "trtype": "TCP", 00:13:02.067 "adrfam": "IPv4", 00:13:02.067 "traddr": "10.0.0.2", 00:13:02.067 "trsvcid": "4420" 00:13:02.067 }, 00:13:02.067 "secure_channel": true 00:13:02.067 } 00:13:02.067 } 00:13:02.067 ] 00:13:02.067 } 00:13:02.067 ] 00:13:02.067 }' 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72667 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72667 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72667 ']' 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.067 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.325 [2024-07-24 21:34:47.102766] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:02.325 [2024-07-24 21:34:47.103081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.325 [2024-07-24 21:34:47.241611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.325 [2024-07-24 21:34:47.314632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.325 [2024-07-24 21:34:47.315014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.325 [2024-07-24 21:34:47.315147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.325 [2024-07-24 21:34:47.315271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.325 [2024-07-24 21:34:47.315309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
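Because this instance (pid 72667) was started with -c, the transport, TLS listener and allowed host are created from the startup JSON rather than by live RPC calls; the tcp.c notices just below confirm that. One way to double-check the result, assuming the generic rpc.py queries (nvmf_get_subsystems is not part of this log):
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk.sock nvmf_get_subsystems   # expect nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420
  $RPC -s /var/tmp/spdk.sock save_config           # should round-trip essentially the same tgtconf JSON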
00:13:02.325 [2024-07-24 21:34:47.315486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.583 [2024-07-24 21:34:47.498014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:02.583 [2024-07-24 21:34:47.575199] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.841 [2024-07-24 21:34:47.591098] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:02.841 [2024-07-24 21:34:47.607117] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:02.841 [2024-07-24 21:34:47.617973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.100 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.100 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:03.100 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.100 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.100 21:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=72699 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 72699 /var/tmp/bdevperf.sock 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72699 ']' 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:03.100 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:03.100 "subsystems": [ 00:13:03.100 { 00:13:03.100 "subsystem": "keyring", 00:13:03.100 "config": [] 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "subsystem": "iobuf", 00:13:03.100 "config": [ 00:13:03.100 { 00:13:03.100 "method": "iobuf_set_options", 00:13:03.100 "params": { 00:13:03.100 "small_pool_count": 8192, 00:13:03.100 "large_pool_count": 1024, 00:13:03.100 "small_bufsize": 8192, 00:13:03.100 "large_bufsize": 135168 00:13:03.100 } 00:13:03.100 } 00:13:03.100 ] 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "subsystem": "sock", 00:13:03.100 "config": [ 00:13:03.100 { 00:13:03.100 "method": "sock_set_default_impl", 00:13:03.100 "params": { 00:13:03.100 "impl_name": "uring" 00:13:03.100 } 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "method": "sock_impl_set_options", 00:13:03.100 "params": { 00:13:03.100 "impl_name": "ssl", 00:13:03.100 "recv_buf_size": 4096, 00:13:03.100 "send_buf_size": 4096, 00:13:03.100 "enable_recv_pipe": true, 00:13:03.100 "enable_quickack": false, 00:13:03.100 "enable_placement_id": 0, 00:13:03.100 "enable_zerocopy_send_server": true, 00:13:03.100 "enable_zerocopy_send_client": false, 00:13:03.100 "zerocopy_threshold": 0, 00:13:03.100 "tls_version": 0, 00:13:03.100 "enable_ktls": false 00:13:03.100 } 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 
"method": "sock_impl_set_options", 00:13:03.100 "params": { 00:13:03.100 "impl_name": "posix", 00:13:03.100 "recv_buf_size": 2097152, 00:13:03.100 "send_buf_size": 2097152, 00:13:03.100 "enable_recv_pipe": true, 00:13:03.100 "enable_quickack": false, 00:13:03.100 "enable_placement_id": 0, 00:13:03.100 "enable_zerocopy_send_server": true, 00:13:03.100 "enable_zerocopy_send_client": false, 00:13:03.100 "zerocopy_threshold": 0, 00:13:03.100 "tls_version": 0, 00:13:03.100 "enable_ktls": false 00:13:03.100 } 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "method": "sock_impl_set_options", 00:13:03.100 "params": { 00:13:03.100 "impl_name": "uring", 00:13:03.100 "recv_buf_size": 2097152, 00:13:03.100 "send_buf_size": 2097152, 00:13:03.100 "enable_recv_pipe": true, 00:13:03.100 "enable_quickack": false, 00:13:03.100 "enable_placement_id": 0, 00:13:03.100 "enable_zerocopy_send_server": false, 00:13:03.100 "enable_zerocopy_send_client": false, 00:13:03.100 "zerocopy_threshold": 0, 00:13:03.100 "tls_version": 0, 00:13:03.100 "enable_ktls": false 00:13:03.100 } 00:13:03.100 } 00:13:03.100 ] 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "subsystem": "vmd", 00:13:03.100 "config": [] 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "subsystem": "accel", 00:13:03.100 "config": [ 00:13:03.100 { 00:13:03.100 "method": "accel_set_options", 00:13:03.100 "params": { 00:13:03.100 "small_cache_size": 128, 00:13:03.100 "large_cache_size": 16, 00:13:03.100 "task_count": 2048, 00:13:03.100 "sequence_count": 2048, 00:13:03.100 "buf_count": 2048 00:13:03.100 } 00:13:03.100 } 00:13:03.100 ] 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "subsystem": "bdev", 00:13:03.100 "config": [ 00:13:03.100 { 00:13:03.100 "method": "bdev_set_options", 00:13:03.100 "params": { 00:13:03.100 "bdev_io_pool_size": 65535, 00:13:03.100 "bdev_io_cache_size": 256, 00:13:03.100 "bdev_auto_examine": true, 00:13:03.100 "iobuf_small_cache_size": 128, 00:13:03.100 "iobuf_large_cache_size": 16 00:13:03.100 } 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "method": "bdev_raid_set_options", 00:13:03.100 "params": { 00:13:03.100 "process_window_size_kb": 1024, 00:13:03.100 "process_max_bandwidth_mb_sec": 0 00:13:03.100 } 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "method": "bdev_iscsi_set_options", 00:13:03.100 "params": { 00:13:03.100 "timeout_sec": 30 00:13:03.100 } 00:13:03.100 }, 00:13:03.100 { 00:13:03.100 "method": "bdev_nvme_set_options", 00:13:03.100 "params": { 00:13:03.100 "action_on_timeout": "none", 00:13:03.100 "timeout_us": 0, 00:13:03.100 "timeout_admin_us": 0, 00:13:03.100 "keep_alive_timeout_ms": 10000, 00:13:03.100 "arbitration_burst": 0, 00:13:03.100 "low_priority_weight": 0, 00:13:03.100 "medium_priority_weight": 0, 00:13:03.100 "high_priority_weight": 0, 00:13:03.100 "nvme_adminq_poll_period_us": 10000, 00:13:03.100 "nvme_ioq_poll_period_us": 0, 00:13:03.100 "io_queue_requests": 512, 00:13:03.100 "delay_cmd_submit": true, 00:13:03.100 "transport_retry_count": 4, 00:13:03.100 "bdev_retry_count": 3, 00:13:03.101 "transport_ack_timeout": 0, 00:13:03.101 "ctrlr_loss_timeout_sec": 0, 00:13:03.101 "reconnect_delay_sec": 0, 00:13:03.101 "fast_io_fail_timeout_sec": 0, 00:13:03.101 "disable_auto_failback": false, 00:13:03.101 "generate_uuids": false, 00:13:03.101 "transport_tos": 0, 00:13:03.101 "nvme_error_stat": false, 00:13:03.101 "rdma_srq_size": 0, 00:13:03.101 "io_path_stat": false, 00:13:03.101 "allow_accel_sequence": false, 00:13:03.101 "rdma_max_cq_size": 0, 00:13:03.101 "rdma_cm_event_timeout_ms": 0, 00:13:03.101 "dhchap_digests": [ 
00:13:03.101 "sha256", 00:13:03.101 "sha384", 00:13:03.101 "sha512" 00:13:03.101 ], 00:13:03.101 "dhchap_dhgroups": [ 00:13:03.101 "null", 00:13:03.101 "ffdhe2048", 00:13:03.101 "ffdhe3072", 00:13:03.101 "ffdhe4096", 00:13:03.101 "ffdhe6144", 00:13:03.101 "ffdhe8192" 00:13:03.101 ] 00:13:03.101 } 00:13:03.101 }, 00:13:03.101 { 00:13:03.101 "method": "bdev_nvme_attach_controller", 00:13:03.101 "params": { 00:13:03.101 "name": "TLSTEST", 00:13:03.101 "trtype": "TCP", 00:13:03.101 "adrfam": "IPv4", 00:13:03.101 "traddr": "10.0.0.2", 00:13:03.101 "trsvcid": "4420", 00:13:03.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.101 "prchk_reftag": false, 00:13:03.101 "prchk_guard": false, 00:13:03.101 "ctrlr_loss_timeout_sec": 0, 00:13:03.101 "reconnect_delay_sec": 0, 00:13:03.101 "fast_io_fail_timeout_sec": 0, 00:13:03.101 "psk": "/tmp/tmp.djUwK7BrNP", 00:13:03.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.101 "hdgst": false, 00:13:03.101 "ddgst": false 00:13:03.101 } 00:13:03.101 }, 00:13:03.101 { 00:13:03.101 "method": "bdev_nvme_set_hotplug", 00:13:03.101 "params": { 00:13:03.101 "period_us": 100000, 00:13:03.101 "enable": false 00:13:03.101 } 00:13:03.101 }, 00:13:03.101 { 00:13:03.101 "method": "bdev_wait_for_examine" 00:13:03.101 } 00:13:03.101 ] 00:13:03.101 }, 00:13:03.101 { 00:13:03.101 "subsystem": "nbd", 00:13:03.101 "config": [] 00:13:03.101 } 00:13:03.101 ] 00:13:03.101 }' 00:13:03.101 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.101 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:03.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:03.101 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.101 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 [2024-07-24 21:34:48.054927] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:13:03.101 [2024-07-24 21:34:48.055207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72699 ] 00:13:03.359 [2024-07-24 21:34:48.190090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.359 [2024-07-24 21:34:48.293586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.618 [2024-07-24 21:34:48.424023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:03.618 [2024-07-24 21:34:48.457396] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:03.618 [2024-07-24 21:34:48.457832] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:04.184 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.184 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:04.184 21:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:04.184 Running I/O for 10 seconds... 00:13:14.208 00:13:14.208 Latency(us) 00:13:14.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.208 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:14.208 Verification LBA range: start 0x0 length 0x2000 00:13:14.208 TLSTESTn1 : 10.01 4646.61 18.15 0.00 0.00 27498.95 6255.71 22282.24 00:13:14.208 =================================================================================================================== 00:13:14.208 Total : 4646.61 18.15 0.00 0.00 27498.95 6255.71 22282.24 00:13:14.208 0 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 72699 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72699 ']' 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72699 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72699 00:13:14.208 killing process with pid 72699 00:13:14.208 Received shutdown signal, test time was about 10.000000 seconds 00:13:14.208 00:13:14.208 Latency(us) 00:13:14.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.208 =================================================================================================================== 00:13:14.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 72699' 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72699 00:13:14.208 [2024-07-24 21:34:59.133792] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:14.208 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72699 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 72667 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72667 ']' 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72667 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72667 00:13:14.467 killing process with pid 72667 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72667' 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72667 00:13:14.467 [2024-07-24 21:34:59.363916] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:14.467 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72667 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72838 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72838 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72838 ']' 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
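The 10-second bdevperf run above uses the path-based PSK: the "bdev_nvme_attach_controller" entry in the dumped config carries "psk": "/tmp/tmp.djUwK7BrNP" together with the TLSTEST controller name, which is why the spdk_nvme_ctrlr_opts.psk deprecation warning fires at attach time and is reported again at shutdown. A roughly equivalent standalone invocation is sketched below; the test actually supplies these parameters through the JSON config shown above, so treat the rpc.py form as an illustration rather than the exact command the script ran.

    # Sketch only: assumed rpc.py equivalent of the "bdev_nvme_attach_controller" JSON entry above.
    # The PSK is passed as an interchange-file path, the form deprecated in favor of keyring keys.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.djUwK7BrNP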
00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.725 21:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.725 [2024-07-24 21:34:59.694588] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:14.725 [2024-07-24 21:34:59.694897] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.984 [2024-07-24 21:34:59.828784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.984 [2024-07-24 21:34:59.906592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.984 [2024-07-24 21:34:59.906699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.984 [2024-07-24 21:34:59.906726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.984 [2024-07-24 21:34:59.906735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.984 [2024-07-24 21:34:59.906741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.984 [2024-07-24 21:34:59.906767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.984 [2024-07-24 21:34:59.957819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.djUwK7BrNP 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.djUwK7BrNP 00:13:15.920 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:16.179 [2024-07-24 21:35:00.960565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.179 21:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:16.179 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:16.438 [2024-07-24 21:35:01.340612] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:16.438 [2024-07-24 21:35:01.340797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.438 21:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:16.698 malloc0 00:13:16.698 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:16.956 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP 00:13:16.957 [2024-07-24 21:35:01.919250] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=72887 00:13:16.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 72887 /var/tmp/bdevperf.sock 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72887 ']' 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.957 21:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.215 [2024-07-24 21:35:01.992457] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
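setup_nvmf_tgt in target/tls.sh builds the TLS-capable target with the six rpc.py calls traced above: create the TCP transport, create cnode1, add a listener on 10.0.0.2:4420 with -k to request TLS (still flagged experimental), create malloc0 (32 MB, 4096-byte blocks), expose it as namespace 1, and allow host1 with the PSK file. Condensed from the trace, with the surrounding shell plumbing omitted:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.djUwK7BrNP   # PSK path form, deprecated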
00:13:17.215 [2024-07-24 21:35:01.992750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72887 ] 00:13:17.215 [2024-07-24 21:35:02.129489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.215 [2024-07-24 21:35:02.213450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.474 [2024-07-24 21:35:02.282696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:18.042 21:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.042 21:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:18.042 21:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.djUwK7BrNP 00:13:18.300 21:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:18.559 [2024-07-24 21:35:03.421327] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:18.559 nvme0n1 00:13:18.559 21:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:18.817 Running I/O for 1 seconds... 00:13:19.751 00:13:19.751 Latency(us) 00:13:19.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.751 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.751 Verification LBA range: start 0x0 length 0x2000 00:13:19.751 nvme0n1 : 1.02 4439.76 17.34 0.00 0.00 28459.34 5302.46 21090.68 00:13:19.751 =================================================================================================================== 00:13:19.751 Total : 4439.76 17.34 0.00 0.00 28459.34 5302.46 21090.68 00:13:19.751 0 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 72887 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72887 ']' 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72887 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72887 00:13:19.751 killing process with pid 72887 00:13:19.751 Received shutdown signal, test time was about 1.000000 seconds 00:13:19.751 00:13:19.751 Latency(us) 00:13:19.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.751 =================================================================================================================== 00:13:19.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:19.751 
21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72887' 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72887 00:13:19.751 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72887 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 72838 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72838 ']' 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72838 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72838 00:13:20.010 killing process with pid 72838 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72838' 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72838 00:13:20.010 [2024-07-24 21:35:04.939795] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:20.010 21:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72838 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72938 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72938 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72938 ']' 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.268 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.269 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
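The run that just finished (target/tls.sh@227-@232) moves the initiator from the PSK path to the keyring: the interchange file is registered as key0 with keyring_file_add_key and the controller is attached with --psk key0, so no spdk_nvme_ctrlr_opts.psk deprecation warning appears on this pass. The host-side sequence, taken from the trace, with bdevperf already listening on /var/tmp/bdevperf.sock:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.djUwK7BrNP
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # 1-second verify run on nvme0n1 (~4439 IOPS above)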
00:13:20.269 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.269 21:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.269 [2024-07-24 21:35:05.205611] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:20.269 [2024-07-24 21:35:05.205722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.526 [2024-07-24 21:35:05.343607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.526 [2024-07-24 21:35:05.422477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.526 [2024-07-24 21:35:05.422843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.526 [2024-07-24 21:35:05.422995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.526 [2024-07-24 21:35:05.423122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.526 [2024-07-24 21:35:05.423154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.526 [2024-07-24 21:35:05.423263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.526 [2024-07-24 21:35:05.473464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.461 [2024-07-24 21:35:06.162845] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.461 malloc0 00:13:21.461 [2024-07-24 21:35:06.193320] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:21.461 [2024-07-24 21:35:06.193729] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=72970 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 72970 
/var/tmp/bdevperf.sock 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72970 ']' 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.461 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.461 [2024-07-24 21:35:06.266890] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:21.461 [2024-07-24 21:35:06.266983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72970 ] 00:13:21.461 [2024-07-24 21:35:06.398513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.720 [2024-07-24 21:35:06.482696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.720 [2024-07-24 21:35:06.550871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.720 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.720 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:21.720 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.djUwK7BrNP 00:13:21.978 21:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:22.237 [2024-07-24 21:35:07.029927] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:22.237 nvme0n1 00:13:22.237 21:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:22.495 Running I/O for 1 seconds... 
00:13:23.430 00:13:23.430 Latency(us) 00:13:23.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.430 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.430 Verification LBA range: start 0x0 length 0x2000 00:13:23.430 nvme0n1 : 1.03 4494.64 17.56 0.00 0.00 28196.46 6166.34 17754.30 00:13:23.430 =================================================================================================================== 00:13:23.430 Total : 4494.64 17.56 0.00 0.00 28196.46 6166.34 17754.30 00:13:23.430 0 00:13:23.430 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:13:23.430 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.430 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.430 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.430 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:13:23.430 "subsystems": [ 00:13:23.430 { 00:13:23.430 "subsystem": "keyring", 00:13:23.430 "config": [ 00:13:23.430 { 00:13:23.430 "method": "keyring_file_add_key", 00:13:23.430 "params": { 00:13:23.430 "name": "key0", 00:13:23.430 "path": "/tmp/tmp.djUwK7BrNP" 00:13:23.430 } 00:13:23.430 } 00:13:23.430 ] 00:13:23.430 }, 00:13:23.430 { 00:13:23.430 "subsystem": "iobuf", 00:13:23.430 "config": [ 00:13:23.430 { 00:13:23.430 "method": "iobuf_set_options", 00:13:23.430 "params": { 00:13:23.430 "small_pool_count": 8192, 00:13:23.430 "large_pool_count": 1024, 00:13:23.430 "small_bufsize": 8192, 00:13:23.430 "large_bufsize": 135168 00:13:23.430 } 00:13:23.430 } 00:13:23.430 ] 00:13:23.430 }, 00:13:23.430 { 00:13:23.430 "subsystem": "sock", 00:13:23.430 "config": [ 00:13:23.430 { 00:13:23.430 "method": "sock_set_default_impl", 00:13:23.430 "params": { 00:13:23.430 "impl_name": "uring" 00:13:23.430 } 00:13:23.430 }, 00:13:23.430 { 00:13:23.430 "method": "sock_impl_set_options", 00:13:23.430 "params": { 00:13:23.430 "impl_name": "ssl", 00:13:23.430 "recv_buf_size": 4096, 00:13:23.430 "send_buf_size": 4096, 00:13:23.430 "enable_recv_pipe": true, 00:13:23.430 "enable_quickack": false, 00:13:23.430 "enable_placement_id": 0, 00:13:23.430 "enable_zerocopy_send_server": true, 00:13:23.430 "enable_zerocopy_send_client": false, 00:13:23.430 "zerocopy_threshold": 0, 00:13:23.430 "tls_version": 0, 00:13:23.430 "enable_ktls": false 00:13:23.430 } 00:13:23.430 }, 00:13:23.430 { 00:13:23.430 "method": "sock_impl_set_options", 00:13:23.430 "params": { 00:13:23.430 "impl_name": "posix", 00:13:23.430 "recv_buf_size": 2097152, 00:13:23.430 "send_buf_size": 2097152, 00:13:23.430 "enable_recv_pipe": true, 00:13:23.430 "enable_quickack": false, 00:13:23.430 "enable_placement_id": 0, 00:13:23.430 "enable_zerocopy_send_server": true, 00:13:23.430 "enable_zerocopy_send_client": false, 00:13:23.430 "zerocopy_threshold": 0, 00:13:23.430 "tls_version": 0, 00:13:23.430 "enable_ktls": false 00:13:23.430 } 00:13:23.430 }, 00:13:23.430 { 00:13:23.430 "method": "sock_impl_set_options", 00:13:23.430 "params": { 00:13:23.430 "impl_name": "uring", 00:13:23.430 "recv_buf_size": 2097152, 00:13:23.430 "send_buf_size": 2097152, 00:13:23.430 "enable_recv_pipe": true, 00:13:23.430 "enable_quickack": false, 00:13:23.430 "enable_placement_id": 0, 00:13:23.430 "enable_zerocopy_send_server": false, 00:13:23.430 "enable_zerocopy_send_client": false, 00:13:23.431 
"zerocopy_threshold": 0, 00:13:23.431 "tls_version": 0, 00:13:23.431 "enable_ktls": false 00:13:23.431 } 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "subsystem": "vmd", 00:13:23.431 "config": [] 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "subsystem": "accel", 00:13:23.431 "config": [ 00:13:23.431 { 00:13:23.431 "method": "accel_set_options", 00:13:23.431 "params": { 00:13:23.431 "small_cache_size": 128, 00:13:23.431 "large_cache_size": 16, 00:13:23.431 "task_count": 2048, 00:13:23.431 "sequence_count": 2048, 00:13:23.431 "buf_count": 2048 00:13:23.431 } 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "subsystem": "bdev", 00:13:23.431 "config": [ 00:13:23.431 { 00:13:23.431 "method": "bdev_set_options", 00:13:23.431 "params": { 00:13:23.431 "bdev_io_pool_size": 65535, 00:13:23.431 "bdev_io_cache_size": 256, 00:13:23.431 "bdev_auto_examine": true, 00:13:23.431 "iobuf_small_cache_size": 128, 00:13:23.431 "iobuf_large_cache_size": 16 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "bdev_raid_set_options", 00:13:23.431 "params": { 00:13:23.431 "process_window_size_kb": 1024, 00:13:23.431 "process_max_bandwidth_mb_sec": 0 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "bdev_iscsi_set_options", 00:13:23.431 "params": { 00:13:23.431 "timeout_sec": 30 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "bdev_nvme_set_options", 00:13:23.431 "params": { 00:13:23.431 "action_on_timeout": "none", 00:13:23.431 "timeout_us": 0, 00:13:23.431 "timeout_admin_us": 0, 00:13:23.431 "keep_alive_timeout_ms": 10000, 00:13:23.431 "arbitration_burst": 0, 00:13:23.431 "low_priority_weight": 0, 00:13:23.431 "medium_priority_weight": 0, 00:13:23.431 "high_priority_weight": 0, 00:13:23.431 "nvme_adminq_poll_period_us": 10000, 00:13:23.431 "nvme_ioq_poll_period_us": 0, 00:13:23.431 "io_queue_requests": 0, 00:13:23.431 "delay_cmd_submit": true, 00:13:23.431 "transport_retry_count": 4, 00:13:23.431 "bdev_retry_count": 3, 00:13:23.431 "transport_ack_timeout": 0, 00:13:23.431 "ctrlr_loss_timeout_sec": 0, 00:13:23.431 "reconnect_delay_sec": 0, 00:13:23.431 "fast_io_fail_timeout_sec": 0, 00:13:23.431 "disable_auto_failback": false, 00:13:23.431 "generate_uuids": false, 00:13:23.431 "transport_tos": 0, 00:13:23.431 "nvme_error_stat": false, 00:13:23.431 "rdma_srq_size": 0, 00:13:23.431 "io_path_stat": false, 00:13:23.431 "allow_accel_sequence": false, 00:13:23.431 "rdma_max_cq_size": 0, 00:13:23.431 "rdma_cm_event_timeout_ms": 0, 00:13:23.431 "dhchap_digests": [ 00:13:23.431 "sha256", 00:13:23.431 "sha384", 00:13:23.431 "sha512" 00:13:23.431 ], 00:13:23.431 "dhchap_dhgroups": [ 00:13:23.431 "null", 00:13:23.431 "ffdhe2048", 00:13:23.431 "ffdhe3072", 00:13:23.431 "ffdhe4096", 00:13:23.431 "ffdhe6144", 00:13:23.431 "ffdhe8192" 00:13:23.431 ] 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "bdev_nvme_set_hotplug", 00:13:23.431 "params": { 00:13:23.431 "period_us": 100000, 00:13:23.431 "enable": false 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "bdev_malloc_create", 00:13:23.431 "params": { 00:13:23.431 "name": "malloc0", 00:13:23.431 "num_blocks": 8192, 00:13:23.431 "block_size": 4096, 00:13:23.431 "physical_block_size": 4096, 00:13:23.431 "uuid": "c4142033-5047-4bc3-9b84-3d7982fa65ab", 00:13:23.431 "optimal_io_boundary": 0, 00:13:23.431 "md_size": 0, 00:13:23.431 "dif_type": 0, 00:13:23.431 "dif_is_head_of_md": false, 00:13:23.431 "dif_pi_format": 0 00:13:23.431 } 
00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "bdev_wait_for_examine" 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "subsystem": "nbd", 00:13:23.431 "config": [] 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "subsystem": "scheduler", 00:13:23.431 "config": [ 00:13:23.431 { 00:13:23.431 "method": "framework_set_scheduler", 00:13:23.431 "params": { 00:13:23.431 "name": "static" 00:13:23.431 } 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "subsystem": "nvmf", 00:13:23.431 "config": [ 00:13:23.431 { 00:13:23.431 "method": "nvmf_set_config", 00:13:23.431 "params": { 00:13:23.431 "discovery_filter": "match_any", 00:13:23.431 "admin_cmd_passthru": { 00:13:23.431 "identify_ctrlr": false 00:13:23.431 } 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_set_max_subsystems", 00:13:23.431 "params": { 00:13:23.431 "max_subsystems": 1024 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_set_crdt", 00:13:23.431 "params": { 00:13:23.431 "crdt1": 0, 00:13:23.431 "crdt2": 0, 00:13:23.431 "crdt3": 0 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_create_transport", 00:13:23.431 "params": { 00:13:23.431 "trtype": "TCP", 00:13:23.431 "max_queue_depth": 128, 00:13:23.431 "max_io_qpairs_per_ctrlr": 127, 00:13:23.431 "in_capsule_data_size": 4096, 00:13:23.431 "max_io_size": 131072, 00:13:23.431 "io_unit_size": 131072, 00:13:23.431 "max_aq_depth": 128, 00:13:23.431 "num_shared_buffers": 511, 00:13:23.431 "buf_cache_size": 4294967295, 00:13:23.431 "dif_insert_or_strip": false, 00:13:23.431 "zcopy": false, 00:13:23.431 "c2h_success": false, 00:13:23.431 "sock_priority": 0, 00:13:23.431 "abort_timeout_sec": 1, 00:13:23.431 "ack_timeout": 0, 00:13:23.431 "data_wr_pool_size": 0 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_create_subsystem", 00:13:23.431 "params": { 00:13:23.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.431 "allow_any_host": false, 00:13:23.431 "serial_number": "00000000000000000000", 00:13:23.431 "model_number": "SPDK bdev Controller", 00:13:23.431 "max_namespaces": 32, 00:13:23.431 "min_cntlid": 1, 00:13:23.431 "max_cntlid": 65519, 00:13:23.431 "ana_reporting": false 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_subsystem_add_host", 00:13:23.431 "params": { 00:13:23.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.431 "host": "nqn.2016-06.io.spdk:host1", 00:13:23.431 "psk": "key0" 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_subsystem_add_ns", 00:13:23.431 "params": { 00:13:23.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.431 "namespace": { 00:13:23.431 "nsid": 1, 00:13:23.431 "bdev_name": "malloc0", 00:13:23.431 "nguid": "C414203350474BC39B843D7982FA65AB", 00:13:23.431 "uuid": "c4142033-5047-4bc3-9b84-3d7982fa65ab", 00:13:23.431 "no_auto_visible": false 00:13:23.431 } 00:13:23.431 } 00:13:23.431 }, 00:13:23.431 { 00:13:23.431 "method": "nvmf_subsystem_add_listener", 00:13:23.431 "params": { 00:13:23.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.431 "listen_address": { 00:13:23.431 "trtype": "TCP", 00:13:23.431 "adrfam": "IPv4", 00:13:23.431 "traddr": "10.0.0.2", 00:13:23.431 "trsvcid": "4420" 00:13:23.431 }, 00:13:23.431 "secure_channel": false, 00:13:23.431 "sock_impl": "ssl" 00:13:23.431 } 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 }' 00:13:23.431 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:23.998 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:13:23.998 "subsystems": [ 00:13:23.998 { 00:13:23.998 "subsystem": "keyring", 00:13:23.998 "config": [ 00:13:23.998 { 00:13:23.998 "method": "keyring_file_add_key", 00:13:23.998 "params": { 00:13:23.998 "name": "key0", 00:13:23.998 "path": "/tmp/tmp.djUwK7BrNP" 00:13:23.998 } 00:13:23.998 } 00:13:23.998 ] 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "subsystem": "iobuf", 00:13:23.998 "config": [ 00:13:23.998 { 00:13:23.998 "method": "iobuf_set_options", 00:13:23.998 "params": { 00:13:23.998 "small_pool_count": 8192, 00:13:23.998 "large_pool_count": 1024, 00:13:23.998 "small_bufsize": 8192, 00:13:23.998 "large_bufsize": 135168 00:13:23.998 } 00:13:23.998 } 00:13:23.998 ] 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "subsystem": "sock", 00:13:23.998 "config": [ 00:13:23.998 { 00:13:23.998 "method": "sock_set_default_impl", 00:13:23.998 "params": { 00:13:23.998 "impl_name": "uring" 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "sock_impl_set_options", 00:13:23.998 "params": { 00:13:23.998 "impl_name": "ssl", 00:13:23.998 "recv_buf_size": 4096, 00:13:23.998 "send_buf_size": 4096, 00:13:23.998 "enable_recv_pipe": true, 00:13:23.998 "enable_quickack": false, 00:13:23.998 "enable_placement_id": 0, 00:13:23.998 "enable_zerocopy_send_server": true, 00:13:23.998 "enable_zerocopy_send_client": false, 00:13:23.998 "zerocopy_threshold": 0, 00:13:23.998 "tls_version": 0, 00:13:23.998 "enable_ktls": false 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "sock_impl_set_options", 00:13:23.998 "params": { 00:13:23.998 "impl_name": "posix", 00:13:23.998 "recv_buf_size": 2097152, 00:13:23.998 "send_buf_size": 2097152, 00:13:23.998 "enable_recv_pipe": true, 00:13:23.998 "enable_quickack": false, 00:13:23.998 "enable_placement_id": 0, 00:13:23.998 "enable_zerocopy_send_server": true, 00:13:23.998 "enable_zerocopy_send_client": false, 00:13:23.998 "zerocopy_threshold": 0, 00:13:23.998 "tls_version": 0, 00:13:23.998 "enable_ktls": false 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "sock_impl_set_options", 00:13:23.998 "params": { 00:13:23.998 "impl_name": "uring", 00:13:23.998 "recv_buf_size": 2097152, 00:13:23.998 "send_buf_size": 2097152, 00:13:23.998 "enable_recv_pipe": true, 00:13:23.998 "enable_quickack": false, 00:13:23.998 "enable_placement_id": 0, 00:13:23.998 "enable_zerocopy_send_server": false, 00:13:23.998 "enable_zerocopy_send_client": false, 00:13:23.998 "zerocopy_threshold": 0, 00:13:23.998 "tls_version": 0, 00:13:23.998 "enable_ktls": false 00:13:23.998 } 00:13:23.998 } 00:13:23.998 ] 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "subsystem": "vmd", 00:13:23.998 "config": [] 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "subsystem": "accel", 00:13:23.998 "config": [ 00:13:23.998 { 00:13:23.998 "method": "accel_set_options", 00:13:23.998 "params": { 00:13:23.998 "small_cache_size": 128, 00:13:23.998 "large_cache_size": 16, 00:13:23.998 "task_count": 2048, 00:13:23.998 "sequence_count": 2048, 00:13:23.998 "buf_count": 2048 00:13:23.998 } 00:13:23.998 } 00:13:23.998 ] 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "subsystem": "bdev", 00:13:23.998 "config": [ 00:13:23.998 { 00:13:23.998 "method": "bdev_set_options", 00:13:23.998 "params": { 00:13:23.998 "bdev_io_pool_size": 65535, 00:13:23.998 "bdev_io_cache_size": 256, 00:13:23.998 "bdev_auto_examine": true, 
00:13:23.998 "iobuf_small_cache_size": 128, 00:13:23.998 "iobuf_large_cache_size": 16 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "bdev_raid_set_options", 00:13:23.998 "params": { 00:13:23.998 "process_window_size_kb": 1024, 00:13:23.998 "process_max_bandwidth_mb_sec": 0 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "bdev_iscsi_set_options", 00:13:23.998 "params": { 00:13:23.998 "timeout_sec": 30 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "bdev_nvme_set_options", 00:13:23.998 "params": { 00:13:23.998 "action_on_timeout": "none", 00:13:23.998 "timeout_us": 0, 00:13:23.998 "timeout_admin_us": 0, 00:13:23.998 "keep_alive_timeout_ms": 10000, 00:13:23.998 "arbitration_burst": 0, 00:13:23.998 "low_priority_weight": 0, 00:13:23.998 "medium_priority_weight": 0, 00:13:23.998 "high_priority_weight": 0, 00:13:23.998 "nvme_adminq_poll_period_us": 10000, 00:13:23.998 "nvme_ioq_poll_period_us": 0, 00:13:23.998 "io_queue_requests": 512, 00:13:23.998 "delay_cmd_submit": true, 00:13:23.998 "transport_retry_count": 4, 00:13:23.998 "bdev_retry_count": 3, 00:13:23.998 "transport_ack_timeout": 0, 00:13:23.998 "ctrlr_loss_timeout_sec": 0, 00:13:23.998 "reconnect_delay_sec": 0, 00:13:23.998 "fast_io_fail_timeout_sec": 0, 00:13:23.998 "disable_auto_failback": false, 00:13:23.998 "generate_uuids": false, 00:13:23.998 "transport_tos": 0, 00:13:23.998 "nvme_error_stat": false, 00:13:23.998 "rdma_srq_size": 0, 00:13:23.998 "io_path_stat": false, 00:13:23.998 "allow_accel_sequence": false, 00:13:23.998 "rdma_max_cq_size": 0, 00:13:23.998 "rdma_cm_event_timeout_ms": 0, 00:13:23.998 "dhchap_digests": [ 00:13:23.998 "sha256", 00:13:23.998 "sha384", 00:13:23.998 "sha512" 00:13:23.998 ], 00:13:23.998 "dhchap_dhgroups": [ 00:13:23.998 "null", 00:13:23.998 "ffdhe2048", 00:13:23.998 "ffdhe3072", 00:13:23.998 "ffdhe4096", 00:13:23.998 "ffdhe6144", 00:13:23.998 "ffdhe8192" 00:13:23.998 ] 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "bdev_nvme_attach_controller", 00:13:23.998 "params": { 00:13:23.998 "name": "nvme0", 00:13:23.998 "trtype": "TCP", 00:13:23.998 "adrfam": "IPv4", 00:13:23.998 "traddr": "10.0.0.2", 00:13:23.998 "trsvcid": "4420", 00:13:23.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.998 "prchk_reftag": false, 00:13:23.998 "prchk_guard": false, 00:13:23.998 "ctrlr_loss_timeout_sec": 0, 00:13:23.998 "reconnect_delay_sec": 0, 00:13:23.998 "fast_io_fail_timeout_sec": 0, 00:13:23.998 "psk": "key0", 00:13:23.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.998 "hdgst": false, 00:13:23.998 "ddgst": false 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "bdev_nvme_set_hotplug", 00:13:23.998 "params": { 00:13:23.998 "period_us": 100000, 00:13:23.998 "enable": false 00:13:23.998 } 00:13:23.998 }, 00:13:23.998 { 00:13:23.998 "method": "bdev_enable_histogram", 00:13:23.998 "params": { 00:13:23.998 "name": "nvme0n1", 00:13:23.998 "enable": true 00:13:23.998 } 00:13:23.999 }, 00:13:23.999 { 00:13:23.999 "method": "bdev_wait_for_examine" 00:13:23.999 } 00:13:23.999 ] 00:13:23.999 }, 00:13:23.999 { 00:13:23.999 "subsystem": "nbd", 00:13:23.999 "config": [] 00:13:23.999 } 00:13:23.999 ] 00:13:23.999 }' 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 72970 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72970 ']' 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 72970 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72970 00:13:23.999 killing process with pid 72970 00:13:23.999 Received shutdown signal, test time was about 1.000000 seconds 00:13:23.999 00:13:23.999 Latency(us) 00:13:23.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.999 =================================================================================================================== 00:13:23.999 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72970' 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72970 00:13:23.999 21:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72970 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 72938 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72938 ']' 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72938 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72938 00:13:24.258 killing process with pid 72938 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72938' 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72938 00:13:24.258 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72938 00:13:24.517 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:13:24.517 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.517 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.517 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:13:24.517 "subsystems": [ 00:13:24.517 { 00:13:24.517 "subsystem": "keyring", 00:13:24.517 "config": [ 00:13:24.517 { 00:13:24.517 "method": "keyring_file_add_key", 00:13:24.517 "params": { 00:13:24.517 "name": "key0", 00:13:24.517 "path": "/tmp/tmp.djUwK7BrNP" 00:13:24.517 } 00:13:24.517 } 00:13:24.517 ] 00:13:24.517 }, 00:13:24.517 { 00:13:24.517 "subsystem": "iobuf", 00:13:24.517 "config": [ 00:13:24.517 { 00:13:24.517 "method": 
"iobuf_set_options", 00:13:24.517 "params": { 00:13:24.517 "small_pool_count": 8192, 00:13:24.517 "large_pool_count": 1024, 00:13:24.517 "small_bufsize": 8192, 00:13:24.518 "large_bufsize": 135168 00:13:24.518 } 00:13:24.518 } 00:13:24.518 ] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "sock", 00:13:24.518 "config": [ 00:13:24.518 { 00:13:24.518 "method": "sock_set_default_impl", 00:13:24.518 "params": { 00:13:24.518 "impl_name": "uring" 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "sock_impl_set_options", 00:13:24.518 "params": { 00:13:24.518 "impl_name": "ssl", 00:13:24.518 "recv_buf_size": 4096, 00:13:24.518 "send_buf_size": 4096, 00:13:24.518 "enable_recv_pipe": true, 00:13:24.518 "enable_quickack": false, 00:13:24.518 "enable_placement_id": 0, 00:13:24.518 "enable_zerocopy_send_server": true, 00:13:24.518 "enable_zerocopy_send_client": false, 00:13:24.518 "zerocopy_threshold": 0, 00:13:24.518 "tls_version": 0, 00:13:24.518 "enable_ktls": false 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "sock_impl_set_options", 00:13:24.518 "params": { 00:13:24.518 "impl_name": "posix", 00:13:24.518 "recv_buf_size": 2097152, 00:13:24.518 "send_buf_size": 2097152, 00:13:24.518 "enable_recv_pipe": true, 00:13:24.518 "enable_quickack": false, 00:13:24.518 "enable_placement_id": 0, 00:13:24.518 "enable_zerocopy_send_server": true, 00:13:24.518 "enable_zerocopy_send_client": false, 00:13:24.518 "zerocopy_threshold": 0, 00:13:24.518 "tls_version": 0, 00:13:24.518 "enable_ktls": false 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "sock_impl_set_options", 00:13:24.518 "params": { 00:13:24.518 "impl_name": "uring", 00:13:24.518 "recv_buf_size": 2097152, 00:13:24.518 "send_buf_size": 2097152, 00:13:24.518 "enable_recv_pipe": true, 00:13:24.518 "enable_quickack": false, 00:13:24.518 "enable_placement_id": 0, 00:13:24.518 "enable_zerocopy_send_server": false, 00:13:24.518 "enable_zerocopy_send_client": false, 00:13:24.518 "zerocopy_threshold": 0, 00:13:24.518 "tls_version": 0, 00:13:24.518 "enable_ktls": false 00:13:24.518 } 00:13:24.518 } 00:13:24.518 ] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "vmd", 00:13:24.518 "config": [] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "accel", 00:13:24.518 "config": [ 00:13:24.518 { 00:13:24.518 "method": "accel_set_options", 00:13:24.518 "params": { 00:13:24.518 "small_cache_size": 128, 00:13:24.518 "large_cache_size": 16, 00:13:24.518 "task_count": 2048, 00:13:24.518 "sequence_count": 2048, 00:13:24.518 "buf_count": 2048 00:13:24.518 } 00:13:24.518 } 00:13:24.518 ] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "bdev", 00:13:24.518 "config": [ 00:13:24.518 { 00:13:24.518 "method": "bdev_set_options", 00:13:24.518 "params": { 00:13:24.518 "bdev_io_pool_size": 65535, 00:13:24.518 "bdev_io_cache_size": 256, 00:13:24.518 "bdev_auto_examine": true, 00:13:24.518 "iobuf_small_cache_size": 128, 00:13:24.518 "iobuf_large_cache_size": 16 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "bdev_raid_set_options", 00:13:24.518 "params": { 00:13:24.518 "process_window_size_kb": 1024, 00:13:24.518 "process_max_bandwidth_mb_sec": 0 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "bdev_iscsi_set_options", 00:13:24.518 "params": { 00:13:24.518 "timeout_sec": 30 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "bdev_nvme_set_options", 00:13:24.518 "params": { 00:13:24.518 "action_on_timeout": "none", 00:13:24.518 
"timeout_us": 0, 00:13:24.518 "timeout_admin_us": 0, 00:13:24.518 "keep_alive_timeout_ms": 10000, 00:13:24.518 "arbitration_burst": 0, 00:13:24.518 "low_priority_weight": 0, 00:13:24.518 "medium_priority_weight": 0, 00:13:24.518 "high_priority_weight": 0, 00:13:24.518 "nvme_adminq_poll_period_us": 10000, 00:13:24.518 "nvme_ioq_poll_period_us": 0, 00:13:24.518 "io_queue_requests": 0, 00:13:24.518 "delay_cmd_submit": true, 00:13:24.518 "transport_retry_count": 4, 00:13:24.518 "bdev_retry_count": 3, 00:13:24.518 "transport_ack_timeout": 0, 00:13:24.518 "ctrlr_loss_timeout_sec": 0, 00:13:24.518 "reconnect_delay_sec": 0, 00:13:24.518 "fast_io_fail_timeout_sec": 0, 00:13:24.518 "disable_auto_failback": false, 00:13:24.518 "generate_uuids": false, 00:13:24.518 "transport_tos": 0, 00:13:24.518 "nvme_error_stat": false, 00:13:24.518 "rdma_srq_size": 0, 00:13:24.518 "io_path_stat": false, 00:13:24.518 "allow_accel_sequence": false, 00:13:24.518 "rdma_max_cq_size": 0, 00:13:24.518 "rdma_cm_event_timeout_ms": 0, 00:13:24.518 "dhchap_digests": [ 00:13:24.518 "sha256", 00:13:24.518 "sha384", 00:13:24.518 "sha512" 00:13:24.518 ], 00:13:24.518 "dhchap_dhgroups": [ 00:13:24.518 "null", 00:13:24.518 "ffdhe2048", 00:13:24.518 "ffdhe3072", 00:13:24.518 "ffdhe4096", 00:13:24.518 "ffdhe6144", 00:13:24.518 "ffdhe8192" 00:13:24.518 ] 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "bdev_nvme_set_hotplug", 00:13:24.518 "params": { 00:13:24.518 "period_us": 100000, 00:13:24.518 "enable": false 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "bdev_malloc_create", 00:13:24.518 "params": { 00:13:24.518 "name": "malloc0", 00:13:24.518 "num_blocks": 8192, 00:13:24.518 "block_size": 4096, 00:13:24.518 "physical_block_size": 4096, 00:13:24.518 "uuid": "c4142033-5047-4bc3-9b84-3d7982fa65ab", 00:13:24.518 "optimal_io_boundary": 0, 00:13:24.518 "md_size": 0, 00:13:24.518 "dif_type": 0, 00:13:24.518 "dif_is_head_of_md": false, 00:13:24.518 "dif_pi_format": 0 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "bdev_wait_for_examine" 00:13:24.518 } 00:13:24.518 ] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "nbd", 00:13:24.518 "config": [] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "scheduler", 00:13:24.518 "config": [ 00:13:24.518 { 00:13:24.518 "method": "framework_set_scheduler", 00:13:24.518 "params": { 00:13:24.518 "name": "static" 00:13:24.518 } 00:13:24.518 } 00:13:24.518 ] 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "subsystem": "nvmf", 00:13:24.518 "config": [ 00:13:24.518 { 00:13:24.518 "method": "nvmf_set_config", 00:13:24.518 "params": { 00:13:24.518 "discovery_filter": "match_any", 00:13:24.518 "admin_cmd_passthru": { 00:13:24.518 "identify_ctrlr": false 00:13:24.518 } 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "nvmf_set_max_subsystems", 00:13:24.518 "params": { 00:13:24.518 "max_subsystems": 1024 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "nvmf_set_crdt", 00:13:24.518 "params": { 00:13:24.518 "crdt1": 0, 00:13:24.518 "crdt2": 0, 00:13:24.518 "crdt3": 0 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "nvmf_create_transport", 00:13:24.518 "params": { 00:13:24.518 "trtype": "TCP", 00:13:24.518 "max_queue_depth": 128, 00:13:24.518 "max_io_qpairs_per_ctrlr": 127, 00:13:24.518 "in_capsule_data_size": 4096, 00:13:24.518 "max_io_size": 131072, 00:13:24.518 "io_unit_size": 131072, 00:13:24.518 "max_aq_depth": 128, 00:13:24.518 "num_shared_buffers": 511, 00:13:24.518 
"buf_cache_size": 4294967295, 00:13:24.518 "dif_insert_or_strip": false, 00:13:24.518 "zcopy": false, 00:13:24.518 "c2h_success": false, 00:13:24.518 "sock_priority": 0, 00:13:24.518 "abort_timeout_sec": 1, 00:13:24.518 "ack_timeout": 0, 00:13:24.518 "data_wr_pool_size": 0 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "nvmf_create_subsystem", 00:13:24.518 "params": { 00:13:24.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.518 "allow_any_host": false, 00:13:24.518 "serial_number": "00000000000000000000", 00:13:24.518 "model_number": "SPDK bdev Controller", 00:13:24.518 "max_namespaces": 32, 00:13:24.518 "min_cntlid": 1, 00:13:24.518 "max_cntlid": 65519, 00:13:24.518 "ana_reporting": false 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "nvmf_subsystem_add_host", 00:13:24.518 "params": { 00:13:24.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.518 "host": "nqn.2016-06.io.spdk:host1", 00:13:24.518 "psk": "key0" 00:13:24.518 } 00:13:24.518 }, 00:13:24.518 { 00:13:24.518 "method": "nvmf_subsystem_add_ns", 00:13:24.518 "params": { 00:13:24.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.518 "namespace": { 00:13:24.518 "nsid": 1, 00:13:24.518 "bdev_name": "malloc0", 00:13:24.518 "nguid": "C414203350474BC39B843D7982FA65AB", 00:13:24.518 "uuid": "c4142033-5047-4bc3-9b84-3d7982fa65ab", 00:13:24.518 "no_auto_visible": false 00:13:24.518 } 00:13:24.519 } 00:13:24.519 }, 00:13:24.519 { 00:13:24.519 "method": "nvmf_subsystem_add_listener", 00:13:24.519 "params": { 00:13:24.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.519 "listen_address": { 00:13:24.519 "trtype": "TCP", 00:13:24.519 "adrfam": "IPv4", 00:13:24.519 "traddr": "10.0.0.2", 00:13:24.519 "trsvcid": "4420" 00:13:24.519 }, 00:13:24.519 "secure_channel": false, 00:13:24.519 "sock_impl": "ssl" 00:13:24.519 } 00:13:24.519 } 00:13:24.519 ] 00:13:24.519 } 00:13:24.519 ] 00:13:24.519 }' 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73018 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73018 00:13:24.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73018 ']' 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.519 21:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.519 [2024-07-24 21:35:09.373467] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:13:24.519 [2024-07-24 21:35:09.373777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.519 [2024-07-24 21:35:09.510855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.777 [2024-07-24 21:35:09.605606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.777 [2024-07-24 21:35:09.605663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.777 [2024-07-24 21:35:09.605691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.777 [2024-07-24 21:35:09.605698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.777 [2024-07-24 21:35:09.605704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.777 [2024-07-24 21:35:09.605777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.777 [2024-07-24 21:35:09.770061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.035 [2024-07-24 21:35:09.843773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.035 [2024-07-24 21:35:09.875730] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.035 [2024-07-24 21:35:09.884792] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.294 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.294 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:25.294 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.294 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:25.294 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73050 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73050 /var/tmp/bdevperf.sock 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73050 ']' 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
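The final block (target/tls.sh@265 onward) checks that the TLS setup survives a configuration round trip: save_config is issued against both the target and bdevperf, and the resulting JSON ($tgtcfg and $bperfcfg above) is fed back when the two applications are restarted, the target via -c /dev/fd/62 and bdevperf via -c /dev/fd/63. That way the keyring entry for key0, the listener with "sock_impl": "ssl" and "secure_channel": false, and the psk reference in nvmf_subsystem_add_host are restored without re-issuing the individual RPCs. A sketch of the pattern follows; the <(echo ...) process substitutions are an assumption about where the /dev/fd descriptors in the log come from, and the real script additionally wraps nvmf_tgt in ip netns exec as shown above.

    tgtcfg=$(rpc.py save_config)                                  # target-side snapshot over /var/tmp/spdk.sock
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)      # initiator-side snapshot
    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &                # appears in the log as -c /dev/fd/62
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &   # -c /dev/fd/63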
00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:25.552 21:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:13:25.552 "subsystems": [ 00:13:25.552 { 00:13:25.552 "subsystem": "keyring", 00:13:25.552 "config": [ 00:13:25.552 { 00:13:25.552 "method": "keyring_file_add_key", 00:13:25.552 "params": { 00:13:25.552 "name": "key0", 00:13:25.552 "path": "/tmp/tmp.djUwK7BrNP" 00:13:25.552 } 00:13:25.552 } 00:13:25.552 ] 00:13:25.552 }, 00:13:25.552 { 00:13:25.552 "subsystem": "iobuf", 00:13:25.552 "config": [ 00:13:25.552 { 00:13:25.552 "method": "iobuf_set_options", 00:13:25.552 "params": { 00:13:25.552 "small_pool_count": 8192, 00:13:25.552 "large_pool_count": 1024, 00:13:25.552 "small_bufsize": 8192, 00:13:25.552 "large_bufsize": 135168 00:13:25.552 } 00:13:25.552 } 00:13:25.552 ] 00:13:25.552 }, 00:13:25.552 { 00:13:25.552 "subsystem": "sock", 00:13:25.552 "config": [ 00:13:25.552 { 00:13:25.552 "method": "sock_set_default_impl", 00:13:25.552 "params": { 00:13:25.552 "impl_name": "uring" 00:13:25.552 } 00:13:25.552 }, 00:13:25.552 { 00:13:25.552 "method": "sock_impl_set_options", 00:13:25.552 "params": { 00:13:25.552 "impl_name": "ssl", 00:13:25.552 "recv_buf_size": 4096, 00:13:25.552 "send_buf_size": 4096, 00:13:25.552 "enable_recv_pipe": true, 00:13:25.552 "enable_quickack": false, 00:13:25.552 "enable_placement_id": 0, 00:13:25.552 "enable_zerocopy_send_server": true, 00:13:25.552 "enable_zerocopy_send_client": false, 00:13:25.552 "zerocopy_threshold": 0, 00:13:25.552 "tls_version": 0, 00:13:25.552 "enable_ktls": false 00:13:25.552 } 00:13:25.552 }, 00:13:25.552 { 00:13:25.552 "method": "sock_impl_set_options", 00:13:25.552 "params": { 00:13:25.552 "impl_name": "posix", 00:13:25.552 "recv_buf_size": 2097152, 00:13:25.552 "send_buf_size": 2097152, 00:13:25.552 "enable_recv_pipe": true, 00:13:25.552 "enable_quickack": false, 00:13:25.552 "enable_placement_id": 0, 00:13:25.552 "enable_zerocopy_send_server": true, 00:13:25.552 "enable_zerocopy_send_client": false, 00:13:25.552 "zerocopy_threshold": 0, 00:13:25.552 "tls_version": 0, 00:13:25.552 "enable_ktls": false 00:13:25.552 } 00:13:25.552 }, 00:13:25.552 { 00:13:25.553 "method": "sock_impl_set_options", 00:13:25.553 "params": { 00:13:25.553 "impl_name": "uring", 00:13:25.553 "recv_buf_size": 2097152, 00:13:25.553 "send_buf_size": 2097152, 00:13:25.553 "enable_recv_pipe": true, 00:13:25.553 "enable_quickack": false, 00:13:25.553 "enable_placement_id": 0, 00:13:25.553 "enable_zerocopy_send_server": false, 00:13:25.553 "enable_zerocopy_send_client": false, 00:13:25.553 "zerocopy_threshold": 0, 00:13:25.553 "tls_version": 0, 00:13:25.553 "enable_ktls": false 00:13:25.553 } 00:13:25.553 } 00:13:25.553 ] 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "subsystem": "vmd", 00:13:25.553 "config": [] 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "subsystem": "accel", 00:13:25.553 "config": [ 00:13:25.553 { 00:13:25.553 "method": "accel_set_options", 00:13:25.553 "params": { 00:13:25.553 "small_cache_size": 128, 00:13:25.553 "large_cache_size": 16, 00:13:25.553 "task_count": 2048, 00:13:25.553 "sequence_count": 2048, 00:13:25.553 "buf_count": 2048 
00:13:25.553 } 00:13:25.553 } 00:13:25.553 ] 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "subsystem": "bdev", 00:13:25.553 "config": [ 00:13:25.553 { 00:13:25.553 "method": "bdev_set_options", 00:13:25.553 "params": { 00:13:25.553 "bdev_io_pool_size": 65535, 00:13:25.553 "bdev_io_cache_size": 256, 00:13:25.553 "bdev_auto_examine": true, 00:13:25.553 "iobuf_small_cache_size": 128, 00:13:25.553 "iobuf_large_cache_size": 16 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_raid_set_options", 00:13:25.553 "params": { 00:13:25.553 "process_window_size_kb": 1024, 00:13:25.553 "process_max_bandwidth_mb_sec": 0 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_iscsi_set_options", 00:13:25.553 "params": { 00:13:25.553 "timeout_sec": 30 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_nvme_set_options", 00:13:25.553 "params": { 00:13:25.553 "action_on_timeout": "none", 00:13:25.553 "timeout_us": 0, 00:13:25.553 "timeout_admin_us": 0, 00:13:25.553 "keep_alive_timeout_ms": 10000, 00:13:25.553 "arbitration_burst": 0, 00:13:25.553 "low_priority_weight": 0, 00:13:25.553 "medium_priority_weight": 0, 00:13:25.553 "high_priority_weight": 0, 00:13:25.553 "nvme_adminq_poll_period_us": 10000, 00:13:25.553 "nvme_ioq_poll_period_us": 0, 00:13:25.553 "io_queue_requests": 512, 00:13:25.553 "delay_cmd_submit": true, 00:13:25.553 "transport_retry_count": 4, 00:13:25.553 "bdev_retry_count": 3, 00:13:25.553 "transport_ack_timeout": 0, 00:13:25.553 "ctrlr_loss_timeout_sec": 0, 00:13:25.553 "reconnect_delay_sec": 0, 00:13:25.553 "fast_io_fail_timeout_sec": 0, 00:13:25.553 "disable_auto_failback": false, 00:13:25.553 "generate_uuids": false, 00:13:25.553 "transport_tos": 0, 00:13:25.553 "nvme_error_stat": false, 00:13:25.553 "rdma_srq_size": 0, 00:13:25.553 "io_path_stat": false, 00:13:25.553 "allow_accel_sequence": false, 00:13:25.553 "rdma_max_cq_size": 0, 00:13:25.553 "rdma_cm_event_timeout_ms": 0, 00:13:25.553 "dhchap_digests": [ 00:13:25.553 "sha256", 00:13:25.553 "sha384", 00:13:25.553 "sha512" 00:13:25.553 ], 00:13:25.553 "dhchap_dhgroups": [ 00:13:25.553 "null", 00:13:25.553 "ffdhe2048", 00:13:25.553 "ffdhe3072", 00:13:25.553 "ffdhe4096", 00:13:25.553 "ffdhe6144", 00:13:25.553 "ffdhe8192" 00:13:25.553 ] 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_nvme_attach_controller", 00:13:25.553 "params": { 00:13:25.553 "name": "nvme0", 00:13:25.553 "trtype": "TCP", 00:13:25.553 "adrfam": "IPv4", 00:13:25.553 "traddr": "10.0.0.2", 00:13:25.553 "trsvcid": "4420", 00:13:25.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.553 "prchk_reftag": false, 00:13:25.553 "prchk_guard": false, 00:13:25.553 "ctrlr_loss_timeout_sec": 0, 00:13:25.553 "reconnect_delay_sec": 0, 00:13:25.553 "fast_io_fail_timeout_sec": 0, 00:13:25.553 "psk": "key0", 00:13:25.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:25.553 "hdgst": false, 00:13:25.553 "ddgst": false 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_nvme_set_hotplug", 00:13:25.553 "params": { 00:13:25.553 "period_us": 100000, 00:13:25.553 "enable": false 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_enable_histogram", 00:13:25.553 "params": { 00:13:25.553 "name": "nvme0n1", 00:13:25.553 "enable": true 00:13:25.553 } 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "method": "bdev_wait_for_examine" 00:13:25.553 } 00:13:25.553 ] 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "subsystem": "nbd", 00:13:25.553 "config": [] 00:13:25.553 } 
00:13:25.553 ] 00:13:25.553 }' 00:13:25.553 [2024-07-24 21:35:10.393763] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:25.553 [2024-07-24 21:35:10.393846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73050 ] 00:13:25.553 [2024-07-24 21:35:10.534896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.811 [2024-07-24 21:35:10.644354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.811 [2024-07-24 21:35:10.797922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:26.069 [2024-07-24 21:35:10.849698] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.327 21:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.327 21:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:26.327 21:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:26.327 21:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:13:26.585 21:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.585 21:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:26.844 Running I/O for 1 seconds... 
00:13:27.778 00:13:27.778 Latency(us) 00:13:27.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.779 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.779 Verification LBA range: start 0x0 length 0x2000 00:13:27.779 nvme0n1 : 1.02 4265.57 16.66 0.00 0.00 29706.14 6672.76 18230.92 00:13:27.779 =================================================================================================================== 00:13:27.779 Total : 4265.57 16.66 0.00 0.00 29706.14 6672.76 18230.92 00:13:27.779 0 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:13:27.779 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:27.779 nvmf_trace.0 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73050 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73050 ']' 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73050 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73050 00:13:28.037 killing process with pid 73050 00:13:28.037 Received shutdown signal, test time was about 1.000000 seconds 00:13:28.037 00:13:28.037 Latency(us) 00:13:28.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.037 =================================================================================================================== 00:13:28.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73050' 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 73050 00:13:28.037 21:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73050 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.298 rmmod nvme_tcp 00:13:28.298 rmmod nvme_fabrics 00:13:28.298 rmmod nvme_keyring 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73018 ']' 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73018 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73018 ']' 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73018 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73018 00:13:28.298 killing process with pid 73018 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73018' 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73018 00:13:28.298 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73018 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9IBgsrBZ5G /tmp/tmp.5XfT8Zl2vi /tmp/tmp.djUwK7BrNP 00:13:28.555 00:13:28.555 real 1m21.608s 00:13:28.555 user 2m2.602s 00:13:28.555 sys 0m29.813s 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.555 ************************************ 00:13:28.555 END TEST nvmf_tls 00:13:28.555 ************************************ 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.555 ************************************ 00:13:28.555 START TEST nvmf_fips 00:13:28.555 ************************************ 00:13:28.555 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:28.814 * Looking for test storage... 00:13:28.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- 
# NET_TYPE=virt 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # 
: 0 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.814 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:28.815 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:13:29.074 Error setting digest 00:13:29.074 0092B829F87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:13:29.074 0092B829F87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:29.074 
21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:29.074 Cannot find device "nvmf_tgt_br" 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.074 Cannot find device "nvmf_tgt_br2" 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:13:29.074 21:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:29.074 Cannot find device "nvmf_tgt_br" 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:29.074 Cannot find device "nvmf_tgt_br2" 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:29.074 21:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.074 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:29.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:29.366 00:13:29.366 --- 10.0.0.2 ping statistics --- 00:13:29.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.366 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:29.366 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.366 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:13:29.366 00:13:29.366 --- 10.0.0.3 ping statistics --- 00:13:29.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.366 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:29.366 00:13:29.366 --- 10.0.0.1 ping statistics --- 00:13:29.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.366 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:29.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73320 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73320 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73320 ']' 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.366 21:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:29.366 [2024-07-24 21:35:14.336322] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
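For orientation, the nvmf_veth_init sequence traced above is the plumbing every run in this log talks over: a nvmf_tgt_ns_spdk namespace holding the target addresses 10.0.0.2/10.0.0.3, bridged through nvmf_br to the initiator-side nvmf_init_if at 10.0.0.1, with TCP port 4420 opened in iptables. A condensed replay of the same commands, copied from the trace (the second target veth pair and the stale-device cleanup are left out):

# Condensed replay of nvmf_veth_init as traced above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target reachability check, as verified in the trace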
00:13:29.367 [2024-07-24 21:35:14.336401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.625 [2024-07-24 21:35:14.478990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.625 [2024-07-24 21:35:14.580588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.625 [2024-07-24 21:35:14.580678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.625 [2024-07-24 21:35:14.580698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.625 [2024-07-24 21:35:14.580709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.625 [2024-07-24 21:35:14.580718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.625 [2024-07-24 21:35:14.580756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.883 [2024-07-24 21:35:14.655229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:30.449 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:30.708 [2024-07-24 21:35:15.518847] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.708 [2024-07-24 21:35:15.534821] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:30.708 [2024-07-24 21:35:15.535020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.708 [2024-07-24 21:35:15.568376] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:30.708 malloc0 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73354 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73354 /var/tmp/bdevperf.sock 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73354 ']' 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.708 21:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:30.708 [2024-07-24 21:35:15.673172] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:30.708 [2024-07-24 21:35:15.673451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73354 ] 00:13:30.966 [2024-07-24 21:35:15.815446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.966 [2024-07-24 21:35:15.921736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.224 [2024-07-24 21:35:15.978947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:31.792 21:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.792 21:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:13:31.792 21:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:31.792 [2024-07-24 21:35:16.687324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:31.792 [2024-07-24 21:35:16.687451] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:31.792 TLSTESTn1 00:13:31.792 21:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.050 Running I/O for 10 seconds... 
00:13:42.024 00:13:42.024 Latency(us) 00:13:42.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.024 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:42.024 Verification LBA range: start 0x0 length 0x2000 00:13:42.024 TLSTESTn1 : 10.02 4448.48 17.38 0.00 0.00 28723.69 5213.09 21924.77 00:13:42.024 =================================================================================================================== 00:13:42.024 Total : 4448.48 17.38 0.00 0.00 28723.69 5213.09 21924.77 00:13:42.024 0 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:13:42.024 21:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:42.024 nvmf_trace.0 00:13:42.024 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:13:42.024 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73354 00:13:42.024 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73354 ']' 00:13:42.024 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73354 00:13:42.024 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73354 00:13:42.283 killing process with pid 73354 00:13:42.283 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.283 00:13:42.283 Latency(us) 00:13:42.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.283 =================================================================================================================== 00:13:42.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73354' 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73354 00:13:42.283 [2024-07-24 21:35:27.047059] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73354 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.283 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.542 rmmod nvme_tcp 00:13:42.542 rmmod nvme_fabrics 00:13:42.542 rmmod nvme_keyring 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73320 ']' 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73320 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73320 ']' 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73320 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73320 00:13:42.542 killing process with pid 73320 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73320' 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73320 00:13:42.542 [2024-07-24 21:35:27.358329] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:42.542 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73320 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:42.801 ************************************ 00:13:42.801 END TEST nvmf_fips 00:13:42.801 ************************************ 00:13:42.801 00:13:42.801 real 0m14.127s 00:13:42.801 user 0m18.032s 00:13:42.801 sys 0m6.334s 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:13:42.801 ************************************ 00:13:42.801 END TEST nvmf_target_extra 00:13:42.801 ************************************ 00:13:42.801 00:13:42.801 real 4m1.258s 00:13:42.801 user 8m11.121s 00:13:42.801 sys 0m59.726s 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.801 21:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.801 21:35:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:42.801 21:35:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:42.801 21:35:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.801 21:35:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.801 ************************************ 00:13:42.801 START TEST nvmf_host 00:13:42.801 ************************************ 00:13:42.801 21:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:43.061 * Looking for test storage... 
00:13:43.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:43.061 ************************************ 00:13:43.061 START TEST nvmf_identify 00:13:43.061 ************************************ 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:43.061 * Looking for test storage... 
00:13:43.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.061 21:35:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:43.061 Cannot find device "nvmf_tgt_br" 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.061 Cannot find device "nvmf_tgt_br2" 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:43.061 Cannot find device "nvmf_tgt_br" 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:43.061 Cannot find device "nvmf_tgt_br2" 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:13:43.061 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:43.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:43.320 00:13:43.320 --- 10.0.0.2 ping statistics --- 00:13:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.320 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:43.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:43.320 00:13:43.320 --- 10.0.0.3 ping statistics --- 00:13:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.320 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:43.320 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:43.320 00:13:43.320 --- 10.0.0.1 ping statistics --- 00:13:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.320 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
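Before that target comes up, the nvmf_veth_init run above has already built the whole test network from one namespace, two veth pairs and a bridge: the initiator keeps 10.0.0.1 on nvmf_init_if, while the nvmf_tgt_ns_spdk namespace owns 10.0.0.2 (and 10.0.0.3 on a second interface, configured the same way). Condensed to its core commands, all taken from the trace above:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side veth peers together and open TCP/4420 toward the initiator interface
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # the pings in the log are the smoke test for this wiring
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1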
00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73727 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73727 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 73727 ']' 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.579 21:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 [2024-07-24 21:35:28.406191] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:43.579 [2024-07-24 21:35:28.406451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.579 [2024-07-24 21:35:28.549355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.837 [2024-07-24 21:35:28.693330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.837 [2024-07-24 21:35:28.693698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.837 [2024-07-24 21:35:28.693905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.837 [2024-07-24 21:35:28.694109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.837 [2024-07-24 21:35:28.694377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
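Those notices are the nvmf_tgt application coming up inside the namespace; the launch itself is a single command, and identify.sh only proceeds once the default RPC socket answers. A rough equivalent is sketched below (waitforlisten effectively polls the socket; a simple rpc_get_methods probe stands in for it here):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # block until /var/tmp/spdk.sock exists and the RPC server responds
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done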
00:13:43.837 [2024-07-24 21:35:28.694730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.837 [2024-07-24 21:35:28.694830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.837 [2024-07-24 21:35:28.695058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.837 [2024-07-24 21:35:28.694910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.837 [2024-07-24 21:35:28.757344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.403 [2024-07-24 21:35:29.343944] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.403 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 Malloc0 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 [2024-07-24 21:35:29.457346] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 [ 00:13:44.662 { 00:13:44.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.662 "subtype": "Discovery", 00:13:44.662 "listen_addresses": [ 00:13:44.662 { 00:13:44.662 "trtype": "TCP", 00:13:44.662 "adrfam": "IPv4", 00:13:44.662 "traddr": "10.0.0.2", 00:13:44.662 "trsvcid": "4420" 00:13:44.662 } 00:13:44.662 ], 00:13:44.662 "allow_any_host": true, 00:13:44.662 "hosts": [] 00:13:44.662 }, 00:13:44.662 { 00:13:44.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.662 "subtype": "NVMe", 00:13:44.662 "listen_addresses": [ 00:13:44.662 { 00:13:44.662 "trtype": "TCP", 00:13:44.662 "adrfam": "IPv4", 00:13:44.662 "traddr": "10.0.0.2", 00:13:44.662 "trsvcid": "4420" 00:13:44.662 } 00:13:44.662 ], 00:13:44.662 "allow_any_host": true, 00:13:44.662 "hosts": [], 00:13:44.662 "serial_number": "SPDK00000000000001", 00:13:44.662 "model_number": "SPDK bdev Controller", 00:13:44.662 "max_namespaces": 32, 00:13:44.662 "min_cntlid": 1, 00:13:44.662 "max_cntlid": 65519, 00:13:44.662 "namespaces": [ 00:13:44.662 { 00:13:44.662 "nsid": 1, 00:13:44.662 "bdev_name": "Malloc0", 00:13:44.662 "name": "Malloc0", 00:13:44.662 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:44.662 "eui64": "ABCDEF0123456789", 00:13:44.662 "uuid": "2c9f2428-baea-4fb5-b448-884232d21acb" 00:13:44.662 } 00:13:44.662 ] 00:13:44.662 } 00:13:44.662 ] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.662 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:44.662 [2024-07-24 21:35:29.515929] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
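Everything the identify test needs on the target side is the handful of rpc_cmd calls above (rpc_cmd in these scripts forwards to scripts/rpc.py against the default /var/tmp/spdk.sock), and the JSON block is simply nvmf_get_subsystems echoing the configuration back. Run directly, the same setup plus the discovery-subsystem identify that produces the trace below would be:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems

  # query the discovery subsystem from the initiator side, with all debug log flags enabled
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all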
00:13:44.662 [2024-07-24 21:35:29.516160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73762 ] 00:13:44.662 [2024-07-24 21:35:29.660806] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:44.662 [2024-07-24 21:35:29.660901] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:44.662 [2024-07-24 21:35:29.660909] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:44.662 [2024-07-24 21:35:29.660922] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:44.662 [2024-07-24 21:35:29.660934] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:44.662 [2024-07-24 21:35:29.661107] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:44.662 [2024-07-24 21:35:29.661159] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x75c2c0 0 00:13:44.928 [2024-07-24 21:35:29.667653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:44.928 [2024-07-24 21:35:29.667677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:44.928 [2024-07-24 21:35:29.667684] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:44.928 [2024-07-24 21:35:29.667687] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:44.928 [2024-07-24 21:35:29.667737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.667751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.667755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.667782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:44.928 [2024-07-24 21:35:29.667813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.675646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.675668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.675673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.675679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.675691] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:44.928 [2024-07-24 21:35:29.675700] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:44.928 [2024-07-24 21:35:29.675706] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:44.928 [2024-07-24 21:35:29.675726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.675731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 
[2024-07-24 21:35:29.675735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.675745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.928 [2024-07-24 21:35:29.675773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.675866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.675873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.675877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.675882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.675888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:44.928 [2024-07-24 21:35:29.675895] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:44.928 [2024-07-24 21:35:29.675903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.675908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.675912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.675919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.928 [2024-07-24 21:35:29.675939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.676001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.676008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.676012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.676023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:44.928 [2024-07-24 21:35:29.676032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.928 [2024-07-24 21:35:29.676039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.676055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.928 [2024-07-24 21:35:29.676074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.676124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.676131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.676135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.676144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.928 [2024-07-24 21:35:29.676155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.676171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.928 [2024-07-24 21:35:29.676190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.676249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.676255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.676259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.676269] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:44.928 [2024-07-24 21:35:29.676274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:44.928 [2024-07-24 21:35:29.676282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.928 [2024-07-24 21:35:29.676387] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:44.928 [2024-07-24 21:35:29.676393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.928 [2024-07-24 21:35:29.676403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.676419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.928 [2024-07-24 21:35:29.676437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.676493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.676500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.676504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676508] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.676514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.928 [2024-07-24 21:35:29.676524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.928 [2024-07-24 21:35:29.676540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.928 [2024-07-24 21:35:29.676558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.928 [2024-07-24 21:35:29.676636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.928 [2024-07-24 21:35:29.676645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.928 [2024-07-24 21:35:29.676649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.928 [2024-07-24 21:35:29.676653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.928 [2024-07-24 21:35:29.676658] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.929 [2024-07-24 21:35:29.676663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:44.929 [2024-07-24 21:35:29.676678] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:44.929 [2024-07-24 21:35:29.676688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.929 [2024-07-24 21:35:29.676700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.676704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.676714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.929 [2024-07-24 21:35:29.676735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.929 [2024-07-24 21:35:29.676893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.929 [2024-07-24 21:35:29.676900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.929 [2024-07-24 21:35:29.676904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.676909] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75c2c0): datao=0, datal=4096, cccid=0 00:13:44.929 [2024-07-24 21:35:29.676914] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79d940) on tqpair(0x75c2c0): expected_datao=0, payload_size=4096 00:13:44.929 [2024-07-24 21:35:29.676919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.676928] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.676932] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.676941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.929 [2024-07-24 21:35:29.676947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.929 [2024-07-24 21:35:29.676951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.676955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.929 [2024-07-24 21:35:29.676964] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:44.929 [2024-07-24 21:35:29.676970] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:44.929 [2024-07-24 21:35:29.676975] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:44.929 [2024-07-24 21:35:29.676985] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:44.929 [2024-07-24 21:35:29.676991] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:44.929 [2024-07-24 21:35:29.676996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:44.929 [2024-07-24 21:35:29.677005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.929 [2024-07-24 21:35:29.677013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.929 [2024-07-24 21:35:29.677057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.929 [2024-07-24 21:35:29.677127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.929 [2024-07-24 21:35:29.677134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.929 [2024-07-24 21:35:29.677138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.929 [2024-07-24 21:35:29.677151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.929 [2024-07-24 21:35:29.677173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:13:44.929 [2024-07-24 21:35:29.677177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.929 [2024-07-24 21:35:29.677193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.929 [2024-07-24 21:35:29.677214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.929 [2024-07-24 21:35:29.677233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.929 [2024-07-24 21:35:29.677242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.929 [2024-07-24 21:35:29.677249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.929 [2024-07-24 21:35:29.677285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79d940, cid 0, qid 0 00:13:44.929 [2024-07-24 21:35:29.677293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79dac0, cid 1, qid 0 00:13:44.929 [2024-07-24 21:35:29.677298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79dc40, cid 2, qid 0 00:13:44.929 [2024-07-24 21:35:29.677302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.929 [2024-07-24 21:35:29.677307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79df40, cid 4, qid 0 00:13:44.929 [2024-07-24 21:35:29.677420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.929 [2024-07-24 21:35:29.677427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.929 [2024-07-24 21:35:29.677431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79df40) on tqpair=0x75c2c0 00:13:44.929 [2024-07-24 21:35:29.677440] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:44.929 [2024-07-24 21:35:29.677446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:44.929 [2024-07-24 21:35:29.677458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.929 [2024-07-24 21:35:29.677489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79df40, cid 4, qid 0 00:13:44.929 [2024-07-24 21:35:29.677562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.929 [2024-07-24 21:35:29.677570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.929 [2024-07-24 21:35:29.677573] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677577] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75c2c0): datao=0, datal=4096, cccid=4 00:13:44.929 [2024-07-24 21:35:29.677582] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79df40) on tqpair(0x75c2c0): expected_datao=0, payload_size=4096 00:13:44.929 [2024-07-24 21:35:29.677587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677594] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677599] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.929 [2024-07-24 21:35:29.677641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.929 [2024-07-24 21:35:29.677645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79df40) on tqpair=0x75c2c0 00:13:44.929 [2024-07-24 21:35:29.677664] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:44.929 [2024-07-24 21:35:29.677692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.929 [2024-07-24 21:35:29.677715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.929 [2024-07-24 21:35:29.677723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x75c2c0) 00:13:44.929 [2024-07-24 21:35:29.677729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.929 [2024-07-24 21:35:29.677766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79df40, cid 4, qid 0 00:13:44.929 [2024-07-24 21:35:29.677774] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79e0c0, cid 5, qid 0 00:13:44.929 [2024-07-24 21:35:29.677911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.930 [2024-07-24 21:35:29.677918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.930 [2024-07-24 21:35:29.677922] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.677926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75c2c0): datao=0, datal=1024, cccid=4 00:13:44.930 [2024-07-24 21:35:29.677931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79df40) on tqpair(0x75c2c0): expected_datao=0, payload_size=1024 00:13:44.930 [2024-07-24 21:35:29.677936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.677943] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.677947] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.677953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.930 [2024-07-24 21:35:29.677959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.930 [2024-07-24 21:35:29.677963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.677967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79e0c0) on tqpair=0x75c2c0 00:13:44.930 [2024-07-24 21:35:29.677985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.930 [2024-07-24 21:35:29.677993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.930 [2024-07-24 21:35:29.677997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79df40) on tqpair=0x75c2c0 00:13:44.930 [2024-07-24 21:35:29.678014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75c2c0) 00:13:44.930 [2024-07-24 21:35:29.678027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.930 [2024-07-24 21:35:29.678052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79df40, cid 4, qid 0 00:13:44.930 [2024-07-24 21:35:29.678127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.930 [2024-07-24 21:35:29.678135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.930 [2024-07-24 21:35:29.678139] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678142] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75c2c0): datao=0, datal=3072, cccid=4 00:13:44.930 [2024-07-24 21:35:29.678147] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79df40) on tqpair(0x75c2c0): expected_datao=0, payload_size=3072 00:13:44.930 [2024-07-24 21:35:29.678152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678159] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678163] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 
21:35:29.678172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.930 [2024-07-24 21:35:29.678178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.930 [2024-07-24 21:35:29.678182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79df40) on tqpair=0x75c2c0 00:13:44.930 [2024-07-24 21:35:29.678197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75c2c0) 00:13:44.930 [2024-07-24 21:35:29.678209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.930 [2024-07-24 21:35:29.678234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79df40, cid 4, qid 0 00:13:44.930 [2024-07-24 21:35:29.678338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.930 [2024-07-24 21:35:29.678345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.930 [2024-07-24 21:35:29.678348] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75c2c0): datao=0, datal=8, cccid=4 00:13:44.930 [2024-07-24 21:35:29.678357] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x79df40) on tqpair(0x75c2c0): expected_datao=0, payload_size=8 00:13:44.930 [2024-07-24 21:35:29.678362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678372] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.930 [2024-07-24 21:35:29.678396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.930 [2024-07-24 21:35:29.678400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.930 [2024-07-24 21:35:29.678404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79df40) on tqpair=0x75c2c0 00:13:44.930 ===================================================== 00:13:44.930 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:44.930 ===================================================== 00:13:44.930 Controller Capabilities/Features 00:13:44.930 ================================ 00:13:44.930 Vendor ID: 0000 00:13:44.930 Subsystem Vendor ID: 0000 00:13:44.930 Serial Number: .................... 00:13:44.930 Model Number: ........................................ 
00:13:44.930 Firmware Version: 24.09 00:13:44.930 Recommended Arb Burst: 0 00:13:44.930 IEEE OUI Identifier: 00 00 00 00:13:44.930 Multi-path I/O 00:13:44.930 May have multiple subsystem ports: No 00:13:44.930 May have multiple controllers: No 00:13:44.930 Associated with SR-IOV VF: No 00:13:44.930 Max Data Transfer Size: 131072 00:13:44.930 Max Number of Namespaces: 0 00:13:44.930 Max Number of I/O Queues: 1024 00:13:44.930 NVMe Specification Version (VS): 1.3 00:13:44.930 NVMe Specification Version (Identify): 1.3 00:13:44.930 Maximum Queue Entries: 128 00:13:44.930 Contiguous Queues Required: Yes 00:13:44.930 Arbitration Mechanisms Supported 00:13:44.930 Weighted Round Robin: Not Supported 00:13:44.930 Vendor Specific: Not Supported 00:13:44.930 Reset Timeout: 15000 ms 00:13:44.930 Doorbell Stride: 4 bytes 00:13:44.930 NVM Subsystem Reset: Not Supported 00:13:44.930 Command Sets Supported 00:13:44.930 NVM Command Set: Supported 00:13:44.930 Boot Partition: Not Supported 00:13:44.930 Memory Page Size Minimum: 4096 bytes 00:13:44.930 Memory Page Size Maximum: 4096 bytes 00:13:44.930 Persistent Memory Region: Not Supported 00:13:44.930 Optional Asynchronous Events Supported 00:13:44.930 Namespace Attribute Notices: Not Supported 00:13:44.930 Firmware Activation Notices: Not Supported 00:13:44.930 ANA Change Notices: Not Supported 00:13:44.930 PLE Aggregate Log Change Notices: Not Supported 00:13:44.930 LBA Status Info Alert Notices: Not Supported 00:13:44.930 EGE Aggregate Log Change Notices: Not Supported 00:13:44.930 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.930 Zone Descriptor Change Notices: Not Supported 00:13:44.930 Discovery Log Change Notices: Supported 00:13:44.930 Controller Attributes 00:13:44.930 128-bit Host Identifier: Not Supported 00:13:44.930 Non-Operational Permissive Mode: Not Supported 00:13:44.930 NVM Sets: Not Supported 00:13:44.930 Read Recovery Levels: Not Supported 00:13:44.930 Endurance Groups: Not Supported 00:13:44.930 Predictable Latency Mode: Not Supported 00:13:44.930 Traffic Based Keep ALive: Not Supported 00:13:44.930 Namespace Granularity: Not Supported 00:13:44.930 SQ Associations: Not Supported 00:13:44.930 UUID List: Not Supported 00:13:44.930 Multi-Domain Subsystem: Not Supported 00:13:44.930 Fixed Capacity Management: Not Supported 00:13:44.930 Variable Capacity Management: Not Supported 00:13:44.930 Delete Endurance Group: Not Supported 00:13:44.930 Delete NVM Set: Not Supported 00:13:44.930 Extended LBA Formats Supported: Not Supported 00:13:44.930 Flexible Data Placement Supported: Not Supported 00:13:44.930 00:13:44.930 Controller Memory Buffer Support 00:13:44.930 ================================ 00:13:44.930 Supported: No 00:13:44.930 00:13:44.930 Persistent Memory Region Support 00:13:44.930 ================================ 00:13:44.930 Supported: No 00:13:44.930 00:13:44.930 Admin Command Set Attributes 00:13:44.930 ============================ 00:13:44.930 Security Send/Receive: Not Supported 00:13:44.930 Format NVM: Not Supported 00:13:44.930 Firmware Activate/Download: Not Supported 00:13:44.930 Namespace Management: Not Supported 00:13:44.930 Device Self-Test: Not Supported 00:13:44.930 Directives: Not Supported 00:13:44.930 NVMe-MI: Not Supported 00:13:44.930 Virtualization Management: Not Supported 00:13:44.930 Doorbell Buffer Config: Not Supported 00:13:44.930 Get LBA Status Capability: Not Supported 00:13:44.930 Command & Feature Lockdown Capability: Not Supported 00:13:44.930 Abort Command Limit: 1 00:13:44.930 Async 
Event Request Limit: 4 00:13:44.931 Number of Firmware Slots: N/A 00:13:44.931 Firmware Slot 1 Read-Only: N/A 00:13:44.931 Firmware Activation Without Reset: N/A 00:13:44.931 Multiple Update Detection Support: N/A 00:13:44.931 Firmware Update Granularity: No Information Provided 00:13:44.931 Per-Namespace SMART Log: No 00:13:44.931 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.931 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:44.931 Command Effects Log Page: Not Supported 00:13:44.931 Get Log Page Extended Data: Supported 00:13:44.931 Telemetry Log Pages: Not Supported 00:13:44.931 Persistent Event Log Pages: Not Supported 00:13:44.931 Supported Log Pages Log Page: May Support 00:13:44.931 Commands Supported & Effects Log Page: Not Supported 00:13:44.931 Feature Identifiers & Effects Log Page:May Support 00:13:44.931 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.931 Data Area 4 for Telemetry Log: Not Supported 00:13:44.931 Error Log Page Entries Supported: 128 00:13:44.931 Keep Alive: Not Supported 00:13:44.931 00:13:44.931 NVM Command Set Attributes 00:13:44.931 ========================== 00:13:44.931 Submission Queue Entry Size 00:13:44.931 Max: 1 00:13:44.931 Min: 1 00:13:44.931 Completion Queue Entry Size 00:13:44.931 Max: 1 00:13:44.931 Min: 1 00:13:44.931 Number of Namespaces: 0 00:13:44.931 Compare Command: Not Supported 00:13:44.931 Write Uncorrectable Command: Not Supported 00:13:44.931 Dataset Management Command: Not Supported 00:13:44.931 Write Zeroes Command: Not Supported 00:13:44.931 Set Features Save Field: Not Supported 00:13:44.931 Reservations: Not Supported 00:13:44.931 Timestamp: Not Supported 00:13:44.931 Copy: Not Supported 00:13:44.931 Volatile Write Cache: Not Present 00:13:44.931 Atomic Write Unit (Normal): 1 00:13:44.931 Atomic Write Unit (PFail): 1 00:13:44.931 Atomic Compare & Write Unit: 1 00:13:44.931 Fused Compare & Write: Supported 00:13:44.931 Scatter-Gather List 00:13:44.931 SGL Command Set: Supported 00:13:44.931 SGL Keyed: Supported 00:13:44.931 SGL Bit Bucket Descriptor: Not Supported 00:13:44.931 SGL Metadata Pointer: Not Supported 00:13:44.931 Oversized SGL: Not Supported 00:13:44.931 SGL Metadata Address: Not Supported 00:13:44.931 SGL Offset: Supported 00:13:44.931 Transport SGL Data Block: Not Supported 00:13:44.931 Replay Protected Memory Block: Not Supported 00:13:44.931 00:13:44.931 Firmware Slot Information 00:13:44.931 ========================= 00:13:44.931 Active slot: 0 00:13:44.931 00:13:44.931 00:13:44.931 Error Log 00:13:44.931 ========= 00:13:44.931 00:13:44.931 Active Namespaces 00:13:44.931 ================= 00:13:44.931 Discovery Log Page 00:13:44.931 ================== 00:13:44.931 Generation Counter: 2 00:13:44.931 Number of Records: 2 00:13:44.931 Record Format: 0 00:13:44.931 00:13:44.931 Discovery Log Entry 0 00:13:44.931 ---------------------- 00:13:44.931 Transport Type: 3 (TCP) 00:13:44.931 Address Family: 1 (IPv4) 00:13:44.931 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:44.931 Entry Flags: 00:13:44.931 Duplicate Returned Information: 1 00:13:44.931 Explicit Persistent Connection Support for Discovery: 1 00:13:44.931 Transport Requirements: 00:13:44.931 Secure Channel: Not Required 00:13:44.931 Port ID: 0 (0x0000) 00:13:44.931 Controller ID: 65535 (0xffff) 00:13:44.931 Admin Max SQ Size: 128 00:13:44.931 Transport Service Identifier: 4420 00:13:44.931 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:44.931 Transport Address: 10.0.0.2 00:13:44.931 
Discovery Log Entry 1 00:13:44.931 ---------------------- 00:13:44.931 Transport Type: 3 (TCP) 00:13:44.931 Address Family: 1 (IPv4) 00:13:44.931 Subsystem Type: 2 (NVM Subsystem) 00:13:44.931 Entry Flags: 00:13:44.931 Duplicate Returned Information: 0 00:13:44.931 Explicit Persistent Connection Support for Discovery: 0 00:13:44.931 Transport Requirements: 00:13:44.931 Secure Channel: Not Required 00:13:44.931 Port ID: 0 (0x0000) 00:13:44.931 Controller ID: 65535 (0xffff) 00:13:44.931 Admin Max SQ Size: 128 00:13:44.931 Transport Service Identifier: 4420 00:13:44.931 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:44.931 Transport Address: 10.0.0.2 [2024-07-24 21:35:29.678509] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:44.931 [2024-07-24 21:35:29.678524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79d940) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.931 [2024-07-24 21:35:29.678537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79dac0) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.931 [2024-07-24 21:35:29.678547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79dc40) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.931 [2024-07-24 21:35:29.678557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.931 [2024-07-24 21:35:29.678572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.931 [2024-07-24 21:35:29.678588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.931 [2024-07-24 21:35:29.678611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.931 [2024-07-24 21:35:29.678676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.931 [2024-07-24 21:35:29.678685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.931 [2024-07-24 21:35:29.678700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.931 [2024-07-24 21:35:29.678736] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.931 [2024-07-24 21:35:29.678762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.931 [2024-07-24 21:35:29.678830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.931 [2024-07-24 21:35:29.678837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.931 [2024-07-24 21:35:29.678841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678850] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:44.931 [2024-07-24 21:35:29.678855] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:44.931 [2024-07-24 21:35:29.678865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.931 [2024-07-24 21:35:29.678881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.931 [2024-07-24 21:35:29.678899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.931 [2024-07-24 21:35:29.678949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.931 [2024-07-24 21:35:29.678956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.931 [2024-07-24 21:35:29.678959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.931 [2024-07-24 21:35:29.678975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.931 [2024-07-24 21:35:29.678980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.678984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.678991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.679009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.679071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.679078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.679081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.679096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679105] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.679112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.679129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.679209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.679216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.679219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.679234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.679250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.679267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.679311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.679318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.679322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.679337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.679353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.679370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.679416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.679423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.679427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.679441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.679457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.679474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.679537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.679544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.679548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.679563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.679571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.679578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.679596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.683642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.683664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.683669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.683674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.683688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.683694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.683698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75c2c0) 00:13:44.932 [2024-07-24 21:35:29.683707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.683732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x79ddc0, cid 3, qid 0 00:13:44.932 [2024-07-24 21:35:29.683782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.683789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.683793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.683797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x79ddc0) on tqpair=0x75c2c0 00:13:44.932 [2024-07-24 21:35:29.683805] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:13:44.932 00:13:44.932 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:44.932 [2024-07-24 21:35:29.730187] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
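The shell line above (host/identify.sh@45) is the command actually under test: build/bin/spdk_nvme_identify pointed at nqn.2016-06.io.spdk:cnode1, with -r carrying a whitespace-separated key:value transport-ID string and -L all turning on every registered debug log flag (debug-level driver messages like the *DEBUG* lines here are only compiled in when SPDK is configured with --enable-debug). As a rough, hedged equivalent of that setup through the public API; the helper name and the choice to enable only the "nvme" flag are assumptions, not the tool's actual code:

#include "spdk/log.h"
#include "spdk/nvme.h"

/* Sketch: approximately what `-L all` and `-r '<trid string>'` map to. */
int enable_nvme_debug_and_parse(struct spdk_nvme_transport_id *trid)
{
    /* -L all enables every registered flag; one component shown here. */
    spdk_log_set_print_level(SPDK_LOG_DEBUG);
    spdk_log_set_flag("nvme");

    /* -r takes a whitespace-separated key:value transport ID string. */
    return spdk_nvme_transport_id_parse(trid,
        "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1");
}
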
00:13:44.932 [2024-07-24 21:35:29.730244] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:13:44.932 [2024-07-24 21:35:29.874875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:44.932 [2024-07-24 21:35:29.874992] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:44.932 [2024-07-24 21:35:29.875014] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:44.932 [2024-07-24 21:35:29.875041] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:44.932 [2024-07-24 21:35:29.875070] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:44.932 [2024-07-24 21:35:29.875256] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:44.932 [2024-07-24 21:35:29.875332] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x24612c0 0 00:13:44.932 [2024-07-24 21:35:29.896774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:44.932 [2024-07-24 21:35:29.896798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:44.932 [2024-07-24 21:35:29.896820] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:44.932 [2024-07-24 21:35:29.896824] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:44.932 [2024-07-24 21:35:29.896926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.896933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.896937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.932 [2024-07-24 21:35:29.896952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:44.932 [2024-07-24 21:35:29.897012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.932 [2024-07-24 21:35:29.904772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.904793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.904815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.904820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.932 [2024-07-24 21:35:29.904846] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:44.932 [2024-07-24 21:35:29.904865] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:44.932 [2024-07-24 21:35:29.904883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:44.932 [2024-07-24 21:35:29.904927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.904932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.932 [2024-07-24 21:35:29.904936] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.932 [2024-07-24 21:35:29.904945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.932 [2024-07-24 21:35:29.904971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.932 [2024-07-24 21:35:29.905066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.932 [2024-07-24 21:35:29.905073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.932 [2024-07-24 21:35:29.905077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.905087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:44.934 [2024-07-24 21:35:29.905096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:44.934 [2024-07-24 21:35:29.905104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.934 [2024-07-24 21:35:29.905120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.934 [2024-07-24 21:35:29.905139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.934 [2024-07-24 21:35:29.905207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.934 [2024-07-24 21:35:29.905214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.934 [2024-07-24 21:35:29.905218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.905229] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:44.934 [2024-07-24 21:35:29.905239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.934 [2024-07-24 21:35:29.905247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.934 [2024-07-24 21:35:29.905263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.934 [2024-07-24 21:35:29.905281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.934 [2024-07-24 21:35:29.905364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.934 [2024-07-24 21:35:29.905385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.934 [2024-07-24 21:35:29.905391] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.905401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.934 [2024-07-24 21:35:29.905412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.934 [2024-07-24 21:35:29.905430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.934 [2024-07-24 21:35:29.905449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.934 [2024-07-24 21:35:29.905503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.934 [2024-07-24 21:35:29.905519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.934 [2024-07-24 21:35:29.905525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.905535] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:44.934 [2024-07-24 21:35:29.905540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:44.934 [2024-07-24 21:35:29.905549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.934 [2024-07-24 21:35:29.905655] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:44.934 [2024-07-24 21:35:29.905674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.934 [2024-07-24 21:35:29.905685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.934 [2024-07-24 21:35:29.905701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.934 [2024-07-24 21:35:29.905722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.934 [2024-07-24 21:35:29.905838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.934 [2024-07-24 21:35:29.905850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.934 [2024-07-24 21:35:29.905855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.905864] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.934 [2024-07-24 21:35:29.905875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.934 [2024-07-24 21:35:29.905890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.934 [2024-07-24 21:35:29.905909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.934 [2024-07-24 21:35:29.905960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.934 [2024-07-24 21:35:29.905967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.934 [2024-07-24 21:35:29.905971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.905975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.905980] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.934 [2024-07-24 21:35:29.905985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:44.934 [2024-07-24 21:35:29.905993] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:44.934 [2024-07-24 21:35:29.906004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.934 [2024-07-24 21:35:29.906014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.906034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.934 [2024-07-24 21:35:29.906042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.934 [2024-07-24 21:35:29.906069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.934 [2024-07-24 21:35:29.906173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.934 [2024-07-24 21:35:29.906181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.934 [2024-07-24 21:35:29.906185] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.906189] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=4096, cccid=0 00:13:44.934 [2024-07-24 21:35:29.906194] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a2940) on tqpair(0x24612c0): expected_datao=0, payload_size=4096 00:13:44.934 [2024-07-24 21:35:29.906200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.906208] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.906213] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 
21:35:29.906222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.934 [2024-07-24 21:35:29.906229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.934 [2024-07-24 21:35:29.906233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.934 [2024-07-24 21:35:29.906237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.934 [2024-07-24 21:35:29.906246] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:44.935 [2024-07-24 21:35:29.906251] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:44.935 [2024-07-24 21:35:29.906256] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:44.935 [2024-07-24 21:35:29.906266] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:44.935 [2024-07-24 21:35:29.906272] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:44.935 [2024-07-24 21:35:29.906277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.935 [2024-07-24 21:35:29.906339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.935 [2024-07-24 21:35:29.906393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.935 [2024-07-24 21:35:29.906400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.935 [2024-07-24 21:35:29.906405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.935 [2024-07-24 21:35:29.906418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.935 [2024-07-24 21:35:29.906439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x24612c0) 00:13:44.935 
[2024-07-24 21:35:29.906453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.935 [2024-07-24 21:35:29.906460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.935 [2024-07-24 21:35:29.906480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.935 [2024-07-24 21:35:29.906499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.935 [2024-07-24 21:35:29.906551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2940, cid 0, qid 0 00:13:44.935 [2024-07-24 21:35:29.906559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2ac0, cid 1, qid 0 00:13:44.935 [2024-07-24 21:35:29.906564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2c40, cid 2, qid 0 00:13:44.935 [2024-07-24 21:35:29.906569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.935 [2024-07-24 21:35:29.906574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.935 [2024-07-24 21:35:29.906662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.935 [2024-07-24 21:35:29.906672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.935 [2024-07-24 21:35:29.906676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.935 [2024-07-24 21:35:29.906686] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:44.935 [2024-07-24 21:35:29.906692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906722] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.935 [2024-07-24 21:35:29.906773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.935 [2024-07-24 21:35:29.906841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.935 [2024-07-24 21:35:29.906848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.935 [2024-07-24 21:35:29.906852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.935 [2024-07-24 21:35:29.906924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.906944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.906949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.906956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.935 [2024-07-24 21:35:29.906975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.935 [2024-07-24 21:35:29.907077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.935 [2024-07-24 21:35:29.907092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.935 [2024-07-24 21:35:29.907098] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907102] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=4096, cccid=4 00:13:44.935 [2024-07-24 21:35:29.907107] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a2f40) on tqpair(0x24612c0): expected_datao=0, payload_size=4096 00:13:44.935 [2024-07-24 21:35:29.907112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907119] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907124] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.935 [2024-07-24 21:35:29.907140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:13:44.935 [2024-07-24 21:35:29.907144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.935 [2024-07-24 21:35:29.907160] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:44.935 [2024-07-24 21:35:29.907172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.907183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:44.935 [2024-07-24 21:35:29.907192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.935 [2024-07-24 21:35:29.907204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.935 [2024-07-24 21:35:29.907224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.935 [2024-07-24 21:35:29.907309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.935 [2024-07-24 21:35:29.907321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.935 [2024-07-24 21:35:29.907326] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907330] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=4096, cccid=4 00:13:44.935 [2024-07-24 21:35:29.907335] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a2f40) on tqpair(0x24612c0): expected_datao=0, payload_size=4096 00:13:44.935 [2024-07-24 21:35:29.907340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907362] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907366] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.935 [2024-07-24 21:35:29.907392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.935 [2024-07-24 21:35:29.907395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.935 [2024-07-24 21:35:29.907399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.907415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.907446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.907481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.936 [2024-07-24 21:35:29.907543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.936 [2024-07-24 21:35:29.907550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.936 [2024-07-24 21:35:29.907554] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907558] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=4096, cccid=4 00:13:44.936 [2024-07-24 21:35:29.907563] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a2f40) on tqpair(0x24612c0): expected_datao=0, payload_size=4096 00:13:44.936 [2024-07-24 21:35:29.907568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907575] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907579] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.936 [2024-07-24 21:35:29.907595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.936 [2024-07-24 21:35:29.907599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.907612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907644] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907673] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.936 [2024-07-24 21:35:29.907678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:44.936 [2024-07-24 21:35:29.907684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:44.936 [2024-07-24 21:35:29.907702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.907714] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.907722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.907736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.936 [2024-07-24 21:35:29.907762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.936 [2024-07-24 21:35:29.907770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a30c0, cid 5, qid 0 00:13:44.936 [2024-07-24 21:35:29.907837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.936 [2024-07-24 21:35:29.907844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.936 [2024-07-24 21:35:29.907848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.907859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.936 [2024-07-24 21:35:29.907866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.936 [2024-07-24 21:35:29.907869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a30c0) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.907884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.907889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.907896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.907925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a30c0, cid 5, qid 0 00:13:44.936 [2024-07-24 21:35:29.907980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.936 [2024-07-24 21:35:29.907989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.936 [2024-07-24 21:35:29.908008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a30c0) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.908039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.908051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.908068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a30c0, cid 5, qid 0 00:13:44.936 [2024-07-24 21:35:29.908121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.936 
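The *DEBUG*/*NOTICE* records above are the SPDK host driver walking its controller-initialization state machine over NVMe/TCP: Set Features (Number of Queues), the Identify round trips for the controller and namespace 1, Keep Alive, and a series of Get Features and Get Log Page commands on the admin queue (cid 4-7 on tqpair 0x24612c0). A minimal sketch of reproducing this query by hand with the SPDK identify example follows; the build path and the exact transport-ID string are assumptions inferred from this log (target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1), not the literal command line used by host/identify.sh.

#!/usr/bin/env bash
# Sketch (assumed paths): query the TCP target from this log with the SPDK identify example.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location
TRID='trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# -r selects the remote controller by transport ID; its report matches the one printed further down in this log.
"$SPDK_DIR/build/examples/identify" -r "$TRID"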
[2024-07-24 21:35:29.908133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.936 [2024-07-24 21:35:29.908137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a30c0) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.908153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.908165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.908182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a30c0, cid 5, qid 0 00:13:44.936 [2024-07-24 21:35:29.908235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.936 [2024-07-24 21:35:29.908242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.936 [2024-07-24 21:35:29.908246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a30c0) on tqpair=0x24612c0 00:13:44.936 [2024-07-24 21:35:29.908269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.908291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.908300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.908311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.908318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.908329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.908337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x24612c0) 00:13:44.936 [2024-07-24 21:35:29.908348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.936 [2024-07-24 21:35:29.908368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a30c0, cid 5, qid 0 00:13:44.936 [2024-07-24 21:35:29.908375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2f40, cid 4, qid 0 00:13:44.936 [2024-07-24 21:35:29.908380] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a3240, cid 6, qid 0 00:13:44.936 [2024-07-24 21:35:29.908399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a33c0, cid 7, qid 0 00:13:44.936 [2024-07-24 21:35:29.908522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.936 [2024-07-24 21:35:29.908529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.936 [2024-07-24 21:35:29.908533] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908536] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=8192, cccid=5 00:13:44.936 [2024-07-24 21:35:29.908541] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a30c0) on tqpair(0x24612c0): expected_datao=0, payload_size=8192 00:13:44.936 [2024-07-24 21:35:29.908545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908561] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908566] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.936 [2024-07-24 21:35:29.908572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.936 [2024-07-24 21:35:29.908578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.937 [2024-07-24 21:35:29.908581] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908585] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=512, cccid=4 00:13:44.937 [2024-07-24 21:35:29.908589] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a2f40) on tqpair(0x24612c0): expected_datao=0, payload_size=512 00:13:44.937 [2024-07-24 21:35:29.908593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908599] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908603] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.937 [2024-07-24 21:35:29.908614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.937 [2024-07-24 21:35:29.908618] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24612c0): datao=0, datal=512, cccid=6 00:13:44.937 [2024-07-24 21:35:29.908626] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a3240) on tqpair(0x24612c0): expected_datao=0, payload_size=512 00:13:44.937 [2024-07-24 21:35:29.908630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908636] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908639] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.908660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.937 [2024-07-24 21:35:29.912734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.937 [2024-07-24 21:35:29.912742] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912746] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x24612c0): datao=0, datal=4096, cccid=7 00:13:44.937 [2024-07-24 21:35:29.912750] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24a33c0) on tqpair(0x24612c0): expected_datao=0, payload_size=4096 00:13:44.937 [2024-07-24 21:35:29.912755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912762] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912766] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.937 [2024-07-24 21:35:29.912782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.937 [2024-07-24 21:35:29.912785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.937 ===================================================== 00:13:44.937 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.937 ===================================================== 00:13:44.937 Controller Capabilities/Features 00:13:44.937 ================================ 00:13:44.937 Vendor ID: 8086 00:13:44.937 Subsystem Vendor ID: 8086 00:13:44.937 Serial Number: SPDK00000000000001 00:13:44.937 Model Number: SPDK bdev Controller 00:13:44.937 Firmware Version: 24.09 00:13:44.937 Recommended Arb Burst: 6 00:13:44.937 IEEE OUI Identifier: e4 d2 5c 00:13:44.937 Multi-path I/O 00:13:44.937 May have multiple subsystem ports: Yes 00:13:44.937 May have multiple controllers: Yes 00:13:44.937 Associated with SR-IOV VF: No 00:13:44.937 Max Data Transfer Size: 131072 00:13:44.937 Max Number of Namespaces: 32 00:13:44.937 Max Number of I/O Queues: 127 00:13:44.937 NVMe Specification Version (VS): 1.3 00:13:44.937 NVMe Specification Version (Identify): 1.3 00:13:44.937 Maximum Queue Entries: 128 00:13:44.937 Contiguous Queues Required: Yes 00:13:44.937 Arbitration Mechanisms Supported 00:13:44.937 Weighted Round Robin: Not Supported 00:13:44.937 Vendor Specific: Not Supported 00:13:44.937 Reset Timeout: 15000 ms 00:13:44.937 Doorbell Stride: 4 bytes 00:13:44.937 NVM Subsystem Reset: Not Supported 00:13:44.937 Command Sets Supported 00:13:44.937 NVM Command Set: Supported 00:13:44.937 Boot Partition: Not Supported 00:13:44.937 Memory Page Size Minimum: 4096 bytes 00:13:44.937 Memory Page Size Maximum: 4096 bytes 00:13:44.937 Persistent Memory Region: Not Supported 00:13:44.937 Optional Asynchronous Events Supported 00:13:44.937 Namespace Attribute Notices: Supported 00:13:44.937 Firmware Activation Notices: Not Supported 00:13:44.937 ANA Change Notices: Not Supported 00:13:44.937 PLE Aggregate Log Change Notices: Not Supported 00:13:44.937 LBA Status Info Alert Notices: Not Supported 00:13:44.937 EGE Aggregate Log Change Notices: Not Supported 00:13:44.937 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.937 Zone Descriptor Change Notices: Not Supported 00:13:44.937 Discovery Log Change Notices: Not Supported 00:13:44.937 Controller Attributes 00:13:44.937 128-bit Host Identifier: Supported 00:13:44.937 Non-Operational Permissive Mode: Not Supported 00:13:44.937 NVM Sets: Not Supported 00:13:44.937 Read Recovery Levels: Not Supported 00:13:44.937 Endurance Groups: Not Supported 00:13:44.937 Predictable Latency Mode: Not Supported 00:13:44.937 Traffic Based Keep ALive: Not Supported 00:13:44.937 Namespace Granularity: Not Supported 00:13:44.937 SQ Associations: Not Supported 00:13:44.937 UUID List: Not Supported 
00:13:44.937 Multi-Domain Subsystem: Not Supported 00:13:44.937 Fixed Capacity Management: Not Supported 00:13:44.937 Variable Capacity Management: Not Supported 00:13:44.937 Delete Endurance Group: Not Supported 00:13:44.937 Delete NVM Set: Not Supported 00:13:44.937 Extended LBA Formats Supported: Not Supported 00:13:44.937 Flexible Data Placement Supported: Not Supported 00:13:44.937 00:13:44.937 Controller Memory Buffer Support 00:13:44.937 ================================ 00:13:44.937 Supported: No 00:13:44.937 00:13:44.937 Persistent Memory Region Support 00:13:44.937 ================================ 00:13:44.937 Supported: No 00:13:44.937 00:13:44.937 Admin Command Set Attributes 00:13:44.937 ============================ 00:13:44.937 Security Send/Receive: Not Supported 00:13:44.937 Format NVM: Not Supported 00:13:44.937 Firmware Activate/Download: Not Supported 00:13:44.937 Namespace Management: Not Supported 00:13:44.937 Device Self-Test: Not Supported 00:13:44.937 Directives: Not Supported 00:13:44.937 NVMe-MI: Not Supported 00:13:44.937 Virtualization Management: Not Supported 00:13:44.937 Doorbell Buffer Config: Not Supported 00:13:44.937 Get LBA Status Capability: Not Supported 00:13:44.937 Command & Feature Lockdown Capability: Not Supported 00:13:44.937 Abort Command Limit: 4 00:13:44.937 Async Event Request Limit: 4 00:13:44.937 Number of Firmware Slots: N/A 00:13:44.937 Firmware Slot 1 Read-Only: N/A 00:13:44.937 Firmware Activation Without Reset: [2024-07-24 21:35:29.912789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a30c0) on tqpair=0x24612c0 00:13:44.937 [2024-07-24 21:35:29.912809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.937 [2024-07-24 21:35:29.912815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.937 [2024-07-24 21:35:29.912819] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2f40) on tqpair=0x24612c0 00:13:44.937 [2024-07-24 21:35:29.912834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.937 [2024-07-24 21:35:29.912840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.937 [2024-07-24 21:35:29.912843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a3240) on tqpair=0x24612c0 00:13:44.937 [2024-07-24 21:35:29.912869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.937 [2024-07-24 21:35:29.912875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.937 [2024-07-24 21:35:29.912887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.937 [2024-07-24 21:35:29.912891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a33c0) on tqpair=0x24612c0 00:13:44.937 N/A 00:13:44.937 Multiple Update Detection Support: N/A 00:13:44.937 Firmware Update Granularity: No Information Provided 00:13:44.937 Per-Namespace SMART Log: No 00:13:44.937 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.937 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:44.937 Command Effects Log Page: Supported 00:13:44.937 Get Log Page Extended Data: Supported 00:13:44.937 Telemetry Log Pages: Not Supported 00:13:44.937 Persistent Event Log Pages: Not Supported 00:13:44.937 Supported Log Pages Log Page: May Support 
00:13:44.937 Commands Supported & Effects Log Page: Not Supported 00:13:44.937 Feature Identifiers & Effects Log Page:May Support 00:13:44.937 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.937 Data Area 4 for Telemetry Log: Not Supported 00:13:44.937 Error Log Page Entries Supported: 128 00:13:44.937 Keep Alive: Supported 00:13:44.937 Keep Alive Granularity: 10000 ms 00:13:44.937 00:13:44.937 NVM Command Set Attributes 00:13:44.937 ========================== 00:13:44.937 Submission Queue Entry Size 00:13:44.937 Max: 64 00:13:44.937 Min: 64 00:13:44.937 Completion Queue Entry Size 00:13:44.937 Max: 16 00:13:44.937 Min: 16 00:13:44.937 Number of Namespaces: 32 00:13:44.937 Compare Command: Supported 00:13:44.937 Write Uncorrectable Command: Not Supported 00:13:44.937 Dataset Management Command: Supported 00:13:44.937 Write Zeroes Command: Supported 00:13:44.937 Set Features Save Field: Not Supported 00:13:44.937 Reservations: Supported 00:13:44.937 Timestamp: Not Supported 00:13:44.938 Copy: Supported 00:13:44.938 Volatile Write Cache: Present 00:13:44.938 Atomic Write Unit (Normal): 1 00:13:44.938 Atomic Write Unit (PFail): 1 00:13:44.938 Atomic Compare & Write Unit: 1 00:13:44.938 Fused Compare & Write: Supported 00:13:44.938 Scatter-Gather List 00:13:44.938 SGL Command Set: Supported 00:13:44.938 SGL Keyed: Supported 00:13:44.938 SGL Bit Bucket Descriptor: Not Supported 00:13:44.938 SGL Metadata Pointer: Not Supported 00:13:44.938 Oversized SGL: Not Supported 00:13:44.938 SGL Metadata Address: Not Supported 00:13:44.938 SGL Offset: Supported 00:13:44.938 Transport SGL Data Block: Not Supported 00:13:44.938 Replay Protected Memory Block: Not Supported 00:13:44.938 00:13:44.938 Firmware Slot Information 00:13:44.938 ========================= 00:13:44.938 Active slot: 1 00:13:44.938 Slot 1 Firmware Revision: 24.09 00:13:44.938 00:13:44.938 00:13:44.938 Commands Supported and Effects 00:13:44.938 ============================== 00:13:44.938 Admin Commands 00:13:44.938 -------------- 00:13:44.938 Get Log Page (02h): Supported 00:13:44.938 Identify (06h): Supported 00:13:44.938 Abort (08h): Supported 00:13:44.938 Set Features (09h): Supported 00:13:44.938 Get Features (0Ah): Supported 00:13:44.938 Asynchronous Event Request (0Ch): Supported 00:13:44.938 Keep Alive (18h): Supported 00:13:44.938 I/O Commands 00:13:44.938 ------------ 00:13:44.938 Flush (00h): Supported LBA-Change 00:13:44.938 Write (01h): Supported LBA-Change 00:13:44.938 Read (02h): Supported 00:13:44.938 Compare (05h): Supported 00:13:44.938 Write Zeroes (08h): Supported LBA-Change 00:13:44.938 Dataset Management (09h): Supported LBA-Change 00:13:44.938 Copy (19h): Supported LBA-Change 00:13:44.938 00:13:44.938 Error Log 00:13:44.938 ========= 00:13:44.938 00:13:44.938 Arbitration 00:13:44.938 =========== 00:13:44.938 Arbitration Burst: 1 00:13:44.938 00:13:44.938 Power Management 00:13:44.938 ================ 00:13:44.938 Number of Power States: 1 00:13:44.938 Current Power State: Power State #0 00:13:44.938 Power State #0: 00:13:44.938 Max Power: 0.00 W 00:13:44.938 Non-Operational State: Operational 00:13:44.938 Entry Latency: Not Reported 00:13:44.938 Exit Latency: Not Reported 00:13:44.938 Relative Read Throughput: 0 00:13:44.938 Relative Read Latency: 0 00:13:44.938 Relative Write Throughput: 0 00:13:44.938 Relative Write Latency: 0 00:13:44.938 Idle Power: Not Reported 00:13:44.938 Active Power: Not Reported 00:13:44.938 Non-Operational Permissive Mode: Not Supported 00:13:44.938 00:13:44.938 Health 
Information 00:13:44.938 ================== 00:13:44.938 Critical Warnings: 00:13:44.938 Available Spare Space: OK 00:13:44.938 Temperature: OK 00:13:44.938 Device Reliability: OK 00:13:44.938 Read Only: No 00:13:44.938 Volatile Memory Backup: OK 00:13:44.938 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:44.938 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:44.938 Available Spare: 0% 00:13:44.938 Available Spare Threshold: 0% 00:13:44.938 Life Percentage Used:[2024-07-24 21:35:29.912994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x24612c0) 00:13:44.938 [2024-07-24 21:35:29.913009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.938 [2024-07-24 21:35:29.913066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a33c0, cid 7, qid 0 00:13:44.938 [2024-07-24 21:35:29.913122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.938 [2024-07-24 21:35:29.913129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.938 [2024-07-24 21:35:29.913134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a33c0) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913181] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:44.938 [2024-07-24 21:35:29.913193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2940) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.938 [2024-07-24 21:35:29.913205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2ac0) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.938 [2024-07-24 21:35:29.913216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2c40) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.938 [2024-07-24 21:35:29.913226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.938 [2024-07-24 21:35:29.913240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.938 [2024-07-24 21:35:29.913257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.938 [2024-07-24 21:35:29.913279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.938 [2024-07-24 
21:35:29.913333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.938 [2024-07-24 21:35:29.913341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.938 [2024-07-24 21:35:29.913345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.938 [2024-07-24 21:35:29.913373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.938 [2024-07-24 21:35:29.913405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.938 [2024-07-24 21:35:29.913520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.938 [2024-07-24 21:35:29.913536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.938 [2024-07-24 21:35:29.913541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913549] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:44.938 [2024-07-24 21:35:29.913554] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:44.938 [2024-07-24 21:35:29.913564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.938 [2024-07-24 21:35:29.913580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.938 [2024-07-24 21:35:29.913598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.938 [2024-07-24 21:35:29.913662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.938 [2024-07-24 21:35:29.913670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.938 [2024-07-24 21:35:29.913674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.938 [2024-07-24 21:35:29.913678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.938 [2024-07-24 21:35:29.913688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.913703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.913722] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.913774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.913781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.913785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.913798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.913813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.913829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.913878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.913885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.913888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.913902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.913910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.913916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.913933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.913985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.913996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914000] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 
21:35:29.914149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.914257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.914394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.914533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 
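The repeated FABRIC PROPERTY GET qid:0 cid:3 notices running through this part of the log are the host shutting the controller down: after nvme_ctrlr_shutdown_set_cc_done reports RTD3E = 0 us and a 10000 ms shutdown timeout, the driver reads CC, writes CC.SHN via a Property Set, and then polls CSTS over the admin queue until shutdown completes. The fragment below is a hedged illustration of the same property reads done from a Linux initiator with nvme-cli; it assumes a kernel-connected controller at /dev/nvme0 and that the nvme-cli build includes the fabrics property subcommands, neither of which is part of this SPDK user-space test run.

#!/usr/bin/env bash
# Hedged sketch: read the fabrics CC (offset 0x14) and CSTS (offset 0x1c) properties with nvme-cli.
# The device name and the presence of the property subcommands are assumptions, not taken from this log.
set -euo pipefail
DEV=${1:-/dev/nvme0}
nvme get-property "$DEV" --offset=0x14 --human-readable   # Controller Configuration (CC)
nvme get-property "$DEV" --offset=0x1c --human-readable   # Controller Status (CSTS); SHST reports shutdown progress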
[2024-07-24 21:35:29.914546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.914642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.914813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.914917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.914931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.914936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.914951] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.914959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.939 [2024-07-24 21:35:29.914966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.939 [2024-07-24 21:35:29.914984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.939 [2024-07-24 21:35:29.915050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.939 [2024-07-24 21:35:29.915062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.939 [2024-07-24 21:35:29.915067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.915071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.939 [2024-07-24 21:35:29.915082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.939 [2024-07-24 21:35:29.915087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915338] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.915890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.915897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.915900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.915914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.915923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.915930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.915947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.916016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.916022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.916042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.916057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.916072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.916089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 
21:35:29.916141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.916158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.916163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.916179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.916195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.916214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.916260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.916271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.916276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.916291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.916308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.916327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.940 [2024-07-24 21:35:29.916398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.940 [2024-07-24 21:35:29.916427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.940 [2024-07-24 21:35:29.916432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.940 [2024-07-24 21:35:29.916460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.940 [2024-07-24 21:35:29.916468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.940 [2024-07-24 21:35:29.916475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.940 [2024-07-24 21:35:29.916492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.941 [2024-07-24 21:35:29.916540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.941 [2024-07-24 21:35:29.916546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.941 
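The shutdown sequence above (RTD3E = 0 us, 10000 ms shutdown timeout, then the CSTS poll loop) finishes in the next record with "shutdown complete in 7 milliseconds", after which the tool flushes the remaining health and namespace report and host/identify.sh tears the subsystem down. On a Linux initiator, a roughly equivalent attach/identify/detach cycle against this target is sketched below; the device name and the use of nvme-cli are assumptions for illustration, since this test drives the controller from SPDK user space rather than the kernel.

#!/usr/bin/env bash
# Hedged sketch: kernel-initiator equivalent of the connect/identify/shutdown cycle in this log.
set -euo pipefail
NQN=nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"   # same controller bring-up as the state machine above
nvme id-ctrl /dev/nvme0                             # device name assumed; check 'nvme list' after connecting
nvme disconnect -n "$NQN"                           # drives the CC.SHN shutdown that the records above poll for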
[2024-07-24 21:35:29.916550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.916554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.941 [2024-07-24 21:35:29.916563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.916567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.916571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.941 [2024-07-24 21:35:29.916577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.941 [2024-07-24 21:35:29.916594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.941 [2024-07-24 21:35:29.920759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.941 [2024-07-24 21:35:29.920778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.941 [2024-07-24 21:35:29.920799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.920803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.941 [2024-07-24 21:35:29.920831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.920836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.920839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24612c0) 00:13:44.941 [2024-07-24 21:35:29.920847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.941 [2024-07-24 21:35:29.920872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24a2dc0, cid 3, qid 0 00:13:44.941 [2024-07-24 21:35:29.920926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.941 [2024-07-24 21:35:29.920932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.941 [2024-07-24 21:35:29.920936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.941 [2024-07-24 21:35:29.920940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24a2dc0) on tqpair=0x24612c0 00:13:44.941 [2024-07-24 21:35:29.920947] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:13:45.200 0% 00:13:45.200 Data Units Read: 0 00:13:45.200 Data Units Written: 0 00:13:45.200 Host Read Commands: 0 00:13:45.200 Host Write Commands: 0 00:13:45.200 Controller Busy Time: 0 minutes 00:13:45.200 Power Cycles: 0 00:13:45.200 Power On Hours: 0 hours 00:13:45.200 Unsafe Shutdowns: 0 00:13:45.200 Unrecoverable Media Errors: 0 00:13:45.200 Lifetime Error Log Entries: 0 00:13:45.200 Warning Temperature Time: 0 minutes 00:13:45.200 Critical Temperature Time: 0 minutes 00:13:45.200 00:13:45.200 Number of Queues 00:13:45.200 ================ 00:13:45.200 Number of I/O Submission Queues: 127 00:13:45.200 Number of I/O Completion Queues: 127 00:13:45.200 00:13:45.200 Active Namespaces 00:13:45.200 ================= 00:13:45.200 Namespace ID:1 00:13:45.200 Error Recovery Timeout: Unlimited 00:13:45.200 Command Set Identifier: NVM (00h) 00:13:45.200 Deallocate: Supported 00:13:45.200 Deallocated/Unwritten 
Error: Not Supported 00:13:45.200 Deallocated Read Value: Unknown 00:13:45.200 Deallocate in Write Zeroes: Not Supported 00:13:45.200 Deallocated Guard Field: 0xFFFF 00:13:45.200 Flush: Supported 00:13:45.200 Reservation: Supported 00:13:45.200 Namespace Sharing Capabilities: Multiple Controllers 00:13:45.200 Size (in LBAs): 131072 (0GiB) 00:13:45.200 Capacity (in LBAs): 131072 (0GiB) 00:13:45.200 Utilization (in LBAs): 131072 (0GiB) 00:13:45.200 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:45.200 EUI64: ABCDEF0123456789 00:13:45.200 UUID: 2c9f2428-baea-4fb5-b448-884232d21acb 00:13:45.200 Thin Provisioning: Not Supported 00:13:45.200 Per-NS Atomic Units: Yes 00:13:45.200 Atomic Boundary Size (Normal): 0 00:13:45.200 Atomic Boundary Size (PFail): 0 00:13:45.200 Atomic Boundary Offset: 0 00:13:45.200 Maximum Single Source Range Length: 65535 00:13:45.200 Maximum Copy Length: 65535 00:13:45.200 Maximum Source Range Count: 1 00:13:45.200 NGUID/EUI64 Never Reused: No 00:13:45.200 Namespace Write Protected: No 00:13:45.200 Number of LBA Formats: 1 00:13:45.200 Current LBA Format: LBA Format #00 00:13:45.200 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:45.200 00:13:45.200 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:45.200 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.200 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.200 21:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.200 rmmod nvme_tcp 00:13:45.200 rmmod nvme_fabrics 00:13:45.200 rmmod nvme_keyring 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 73727 ']' 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 73727 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 73727 ']' 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 73727 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.200 21:35:30 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73727 00:13:45.200 killing process with pid 73727 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73727' 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 73727 00:13:45.200 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 73727 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:45.459 00:13:45.459 real 0m2.525s 00:13:45.459 user 0m6.784s 00:13:45.459 sys 0m0.688s 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.459 ************************************ 00:13:45.459 END TEST nvmf_identify 00:13:45.459 ************************************ 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:45.459 ************************************ 00:13:45.459 START TEST nvmf_perf 00:13:45.459 ************************************ 00:13:45.459 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:45.719 * Looking for test storage... 
00:13:45.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.719 21:35:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:45.719 Cannot find device "nvmf_tgt_br" 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.719 Cannot find device "nvmf_tgt_br2" 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:13:45.719 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:45.720 Cannot find device "nvmf_tgt_br" 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:13:45.720 Cannot find device "nvmf_tgt_br2" 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.720 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:45.979 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:45.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:45.980 00:13:45.980 --- 10.0.0.2 ping statistics --- 00:13:45.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.980 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:45.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:13:45.980 00:13:45.980 --- 10.0.0.3 ping statistics --- 00:13:45.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.980 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:45.980 00:13:45.980 --- 10.0.0.1 ping statistics --- 00:13:45.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.980 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:45.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
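The nvmf_veth_init trace above builds a small three-leg topology: the initiator-side veth nvmf_init_if (10.0.0.1/24) stays on the host, the target-side veths nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. A minimal sketch for reproducing it by hand follows; interface names and addresses are taken from the trace, and the error-tolerant pre-cleanup ("Cannot find device ...") that common.sh runs first is omitted.

# create the namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring links up on both sides of the namespace boundary
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# join the host-side peers with a bridge and open TCP/4420 from the initiator
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # same reachability check the harness runs above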
00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=73928 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 73928 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 73928 ']' 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.980 21:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:45.980 [2024-07-24 21:35:30.966258] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:13:45.980 [2024-07-24 21:35:30.966897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.239 [2024-07-24 21:35:31.103991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.239 [2024-07-24 21:35:31.197699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.239 [2024-07-24 21:35:31.197991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.239 [2024-07-24 21:35:31.198197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.239 [2024-07-24 21:35:31.198311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.239 [2024-07-24 21:35:31.198545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
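The target the perf test talks to is the nvmf_tgt instance launched above inside the namespace; the provisioning that perf.sh's rpc.py trace performs next (TCP transport, subsystem cnode1 backed by a Malloc0 ramdisk plus the local Nvme0n1 drive, and a listener on 10.0.0.2:4420) reduces to roughly the sketch below. Paths, NQN, and bdev names are taken from the trace; the readiness loop is a simplified stand-in for the harness's waitforlisten, and the attach-controller line is only an approximation of the gen_nvme.sh | load_subsystem_config step seen in the trace.

# launch the target inside the namespace (same command as in the trace), then provision it over /var/tmp/spdk.sock
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# the harness uses waitforlisten; polling until the RPC socket answers is a simple stand-in
until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# rough equivalent of the gen_nvme.sh/load_subsystem_config step: expose the local PCIe drive as Nvme0n1
"$rpc_py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
"$rpc_py" bdev_malloc_create 64 512          # creates the Malloc0 bdev used below
"$rpc_py" nvmf_create_transport -t tcp -o
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

(The serial number in the trace is SPDK00000000000001; the line above uses the common.sh default only as a placeholder if you adapt this outside the harness.)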
00:13:46.239 [2024-07-24 21:35:31.198775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.239 [2024-07-24 21:35:31.198910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.239 [2024-07-24 21:35:31.199010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.239 [2024-07-24 21:35:31.199013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.498 [2024-07-24 21:35:31.255398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:47.065 21:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.065 21:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:13:47.065 21:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.065 21:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.065 21:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:47.065 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.065 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:47.065 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:47.631 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:47.631 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:47.890 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:47.890 21:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.148 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:48.148 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:13:48.148 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:48.148 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:48.148 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.407 [2024-07-24 21:35:33.278373] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.407 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:48.666 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:48.666 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.924 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:48.924 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:49.183 21:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:49.183 [2024-07-24 21:35:34.168155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.442 21:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.701 21:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:49.701 21:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:49.701 21:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:49.701 21:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:50.637 Initializing NVMe Controllers 00:13:50.637 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:50.637 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:50.637 Initialization complete. Launching workers. 00:13:50.637 ======================================================== 00:13:50.637 Latency(us) 00:13:50.637 Device Information : IOPS MiB/s Average min max 00:13:50.637 PCIE (0000:00:10.0) NSID 1 from core 0: 22020.08 86.02 1452.51 321.03 8377.85 00:13:50.637 ======================================================== 00:13:50.637 Total : 22020.08 86.02 1452.51 321.03 8377.85 00:13:50.637 00:13:50.637 21:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:52.035 Initializing NVMe Controllers 00:13:52.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:52.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:52.035 Initialization complete. Launching workers. 00:13:52.035 ======================================================== 00:13:52.035 Latency(us) 00:13:52.035 Device Information : IOPS MiB/s Average min max 00:13:52.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2774.12 10.84 360.20 108.33 8086.99 00:13:52.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.87 0.49 8031.06 6177.32 12098.31 00:13:52.035 ======================================================== 00:13:52.035 Total : 2899.99 11.33 693.14 108.33 12098.31 00:13:52.035 00:13:52.035 21:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:53.413 Initializing NVMe Controllers 00:13:53.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:53.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:53.413 Initialization complete. Launching workers. 
00:13:53.413 ======================================================== 00:13:53.413 Latency(us) 00:13:53.413 Device Information : IOPS MiB/s Average min max 00:13:53.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8519.99 33.28 3763.04 556.88 9926.89 00:13:53.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3968.00 15.50 8102.45 6160.98 16272.73 00:13:53.413 ======================================================== 00:13:53.413 Total : 12487.99 48.78 5141.86 556.88 16272.73 00:13:53.413 00:13:53.413 21:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:53.413 21:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:55.946 Initializing NVMe Controllers 00:13:55.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.946 Controller IO queue size 128, less than required. 00:13:55.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.946 Controller IO queue size 128, less than required. 00:13:55.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:55.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:55.946 Initialization complete. Launching workers. 00:13:55.946 ======================================================== 00:13:55.946 Latency(us) 00:13:55.946 Device Information : IOPS MiB/s Average min max 00:13:55.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1799.90 449.98 72035.12 34618.04 104839.38 00:13:55.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 685.46 171.37 197049.23 58247.48 347936.31 00:13:55.946 ======================================================== 00:13:55.946 Total : 2485.36 621.34 106513.97 34618.04 347936.31 00:13:55.946 00:13:55.946 21:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:56.204 Initializing NVMe Controllers 00:13:56.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.204 Controller IO queue size 128, less than required. 00:13:56.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.204 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:56.204 Controller IO queue size 128, less than required. 00:13:56.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.204 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:13:56.204 WARNING: Some requested NVMe devices were skipped 00:13:56.204 No valid NVMe controllers or AIO or URING devices found 00:13:56.204 21:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:58.795 Initializing NVMe Controllers 00:13:58.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.795 Controller IO queue size 128, less than required. 00:13:58.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.795 Controller IO queue size 128, less than required. 00:13:58.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.795 Initialization complete. Launching workers. 00:13:58.795 00:13:58.795 ==================== 00:13:58.795 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:58.795 TCP transport: 00:13:58.795 polls: 12447 00:13:58.795 idle_polls: 9749 00:13:58.795 sock_completions: 2698 00:13:58.795 nvme_completions: 5219 00:13:58.795 submitted_requests: 7888 00:13:58.795 queued_requests: 1 00:13:58.795 00:13:58.795 ==================== 00:13:58.795 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:58.795 TCP transport: 00:13:58.795 polls: 8662 00:13:58.795 idle_polls: 4799 00:13:58.795 sock_completions: 3863 00:13:58.795 nvme_completions: 6011 00:13:58.795 submitted_requests: 9068 00:13:58.795 queued_requests: 1 00:13:58.795 ======================================================== 00:13:58.795 Latency(us) 00:13:58.795 Device Information : IOPS MiB/s Average min max 00:13:58.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1302.13 325.53 101397.30 45692.38 176020.44 00:13:58.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1499.77 374.94 86081.00 36403.79 129031.34 00:13:58.795 ======================================================== 00:13:58.795 Total : 2801.89 700.47 93198.96 36403.79 176020.44 00:13:58.795 00:13:58.795 21:35:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:13:58.795 21:35:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:13:59.053 rmmod nvme_tcp 00:13:59.053 rmmod nvme_fabrics 00:13:59.053 rmmod nvme_keyring 00:13:59.053 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 73928 ']' 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 73928 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 73928 ']' 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 73928 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73928 00:13:59.311 killing process with pid 73928 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73928' 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 73928 00:13:59.311 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 73928 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:59.877 ************************************ 00:13:59.877 END TEST nvmf_perf 00:13:59.877 ************************************ 00:13:59.877 00:13:59.877 real 0m14.400s 00:13:59.877 user 0m52.753s 00:13:59.877 sys 0m4.011s 00:13:59.877 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.878 21:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:00.136 21:35:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:00.136 21:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:00.136 21:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.136 21:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.136 
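Before the fio_host run starts below, a quick reference for the spdk_nvme_perf invocations that produced the tables above: -q sets the queue depth, -o the I/O size in bytes, -w and -M the workload and read percentage (randrw at a 50% read mix throughout), -t the run time in seconds, and -r the transport ID of the target; the final run adds --transport-stat, which prints the per-queue poll and completion counters shown. A representative invocation against this target, copied from the trace (the binary path is the autotest build location), looks like:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'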
************************************ 00:14:00.136 START TEST nvmf_fio_host 00:14:00.136 ************************************ 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:00.137 * Looking for test storage... 00:14:00.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:00.137 21:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.137 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:00.138 21:35:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:00.138 Cannot find device "nvmf_tgt_br" 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.138 Cannot find device "nvmf_tgt_br2" 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:00.138 Cannot find device "nvmf_tgt_br" 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:00.138 Cannot find device "nvmf_tgt_br2" 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.138 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:00.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:00.396 00:14:00.396 --- 10.0.0.2 ping statistics --- 00:14:00.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.396 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:00.396 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:00.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:00.396 00:14:00.396 --- 10.0.0.3 ping statistics --- 00:14:00.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.397 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:00.397 00:14:00.397 --- 10.0.0.1 ping statistics --- 00:14:00.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.397 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74338 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74338 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74338 ']' 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.397 21:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.655 [2024-07-24 21:35:45.431829] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
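For reference, the nvmf_veth_init sequence traced above builds a private veth/bridge topology so the target can listen inside its own network namespace while the initiator stays on the host side. A minimal sketch of the same plumbing, with the interface names and addresses taken from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here; cleanup and error handling are also omitted):

  # target namespace plus veth pairs for the initiator and target sides
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 stays on the host (initiator); 10.0.0.2 lives in the namespace (target)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity checks matching the pings in the log
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1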
00:14:00.655 [2024-07-24 21:35:45.432118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.655 [2024-07-24 21:35:45.568218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.913 [2024-07-24 21:35:45.668537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.913 [2024-07-24 21:35:45.668994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.913 [2024-07-24 21:35:45.669379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.913 [2024-07-24 21:35:45.669606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.913 [2024-07-24 21:35:45.669803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.913 [2024-07-24 21:35:45.670026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.913 [2024-07-24 21:35:45.670127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.913 [2024-07-24 21:35:45.670198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.913 [2024-07-24 21:35:45.670194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.913 [2024-07-24 21:35:45.726335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.480 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.480 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:14:01.480 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.739 [2024-07-24 21:35:46.545169] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.739 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:01.739 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.739 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:01.739 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.997 Malloc1 00:14:01.997 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:01.997 21:35:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.255 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.513 [2024-07-24 21:35:47.319013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
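The RPC calls traced above are the entire target-side provisioning for this fio run. Condensed into a standalone sketch (same commands and arguments, issued against the nvmf_tgt started inside the namespace; the flag explanations in the comments are the usual rpc.py meanings, not something the log states):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport; the -o and -u 8192 options are copied verbatim from the trace
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc bdev with 512-byte blocks, exported as a namespace of cnode1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

  # data listener plus a discovery listener on the namespace-side address
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420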
host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:02.513 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:02.771 21:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:02.771 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:02.771 fio-3.35 00:14:02.771 Starting 1 thread 00:14:05.303 00:14:05.303 test: (groupid=0, jobs=1): err= 0: pid=74417: Wed Jul 24 21:35:49 2024 00:14:05.303 read: IOPS=8749, BW=34.2MiB/s (35.8MB/s)(68.6MiB/2008msec) 00:14:05.303 slat (nsec): min=1897, max=334815, 
avg=2489.91, stdev=3445.79 00:14:05.303 clat (usec): min=2621, max=16778, avg=7616.97, stdev=638.19 00:14:05.303 lat (usec): min=2664, max=16780, avg=7619.46, stdev=637.95 00:14:05.303 clat percentiles (usec): 00:14:05.303 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7177], 00:14:05.303 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:14:05.303 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:14:05.303 | 99.00th=[ 9372], 99.50th=[10028], 99.90th=[14746], 99.95th=[15795], 00:14:05.303 | 99.99th=[16712] 00:14:05.303 bw ( KiB/s): min=34416, max=35256, per=100.00%, avg=35022.00, stdev=405.95, samples=4 00:14:05.303 iops : min= 8604, max= 8814, avg=8755.50, stdev=101.49, samples=4 00:14:05.303 write: IOPS=8755, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2008msec); 0 zone resets 00:14:05.303 slat (nsec): min=1966, max=254607, avg=2622.63, stdev=2650.52 00:14:05.303 clat (usec): min=2482, max=16547, avg=6946.69, stdev=592.65 00:14:05.303 lat (usec): min=2496, max=16549, avg=6949.32, stdev=592.51 00:14:05.303 clat percentiles (usec): 00:14:05.304 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:14:05.304 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:14:05.304 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7701], 00:14:05.304 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[13698], 99.95th=[15795], 00:14:05.304 | 99.99th=[16450] 00:14:05.304 bw ( KiB/s): min=34440, max=35480, per=100.00%, avg=35026.00, stdev=441.64, samples=4 00:14:05.304 iops : min= 8610, max= 8870, avg=8756.50, stdev=110.41, samples=4 00:14:05.304 lat (msec) : 4=0.08%, 10=99.53%, 20=0.39% 00:14:05.304 cpu : usr=70.95%, sys=21.92%, ctx=7, majf=0, minf=7 00:14:05.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:05.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:05.304 issued rwts: total=17569,17582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:05.304 00:14:05.304 Run status group 0 (all jobs): 00:14:05.304 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.6MiB (72.0MB), run=2008-2008msec 00:14:05.304 WRITE: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2008-2008msec 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host 
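The fio_plugin helper traced on both fio runs boils down to preloading the SPDK NVMe fio plugin (plus the ASan runtime, if the plugin links against one) and passing the NVMe-oF connection parameters through --filename instead of a device path; the job file's ioengine=spdk then resolves to the preloaded plugin. A condensed sketch of the first invocation, with the sanitizer check folded into one line (paths as in the log):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

  # preload the sanitizer runtime first if the plugin was built with ASan
  asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt\.asan/ {print $3; exit}')

  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096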
-- common/autotest_common.sh@1341 -- # shift 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:05.304 21:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:05.304 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:05.304 fio-3.35 00:14:05.304 Starting 1 thread 00:14:07.838 00:14:07.838 test: (groupid=0, jobs=1): err= 0: pid=74460: Wed Jul 24 21:35:52 2024 00:14:07.838 read: IOPS=8202, BW=128MiB/s (134MB/s)(257MiB/2004msec) 00:14:07.838 slat (usec): min=2, max=116, avg= 3.71, stdev= 2.52 00:14:07.838 clat (usec): min=1683, max=21869, avg=8945.94, stdev=2713.18 00:14:07.838 lat (usec): min=1686, max=21872, avg=8949.65, stdev=2713.25 00:14:07.838 clat percentiles (usec): 00:14:07.838 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6325], 00:14:07.838 | 30.00th=[ 7111], 40.00th=[ 7963], 50.00th=[ 8848], 60.00th=[ 9634], 00:14:07.838 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12387], 95.00th=[13304], 00:14:07.838 | 99.00th=[15270], 99.50th=[15795], 99.90th=[18482], 99.95th=[19006], 00:14:07.838 | 99.99th=[19268] 00:14:07.838 bw ( KiB/s): min=53024, max=77632, per=49.90%, avg=65488.00, stdev=10148.27, samples=4 00:14:07.838 iops : min= 3314, max= 4852, avg=4093.00, stdev=634.27, samples=4 00:14:07.838 write: IOPS=4843, BW=75.7MiB/s (79.4MB/s)(135MiB/1779msec); 0 zone resets 00:14:07.838 slat (usec): min=30, max=192, avg=37.87, stdev= 8.36 00:14:07.838 clat (usec): min=3721, max=21981, avg=12123.09, stdev=2250.23 00:14:07.838 lat (usec): min=3769, max=22018, avg=12160.95, stdev=2250.39 00:14:07.838 clat percentiles (usec): 00:14:07.838 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10290], 00:14:07.838 | 
30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:14:07.838 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15139], 95.00th=[16188], 00:14:07.838 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19792], 99.95th=[21365], 00:14:07.838 | 99.99th=[21890] 00:14:07.838 bw ( KiB/s): min=55744, max=80000, per=88.26%, avg=68400.00, stdev=10044.31, samples=4 00:14:07.838 iops : min= 3484, max= 5000, avg=4275.00, stdev=627.77, samples=4 00:14:07.838 lat (msec) : 2=0.01%, 4=0.51%, 10=46.62%, 20=52.82%, 50=0.04% 00:14:07.838 cpu : usr=79.88%, sys=15.33%, ctx=3, majf=0, minf=14 00:14:07.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:07.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.838 issued rwts: total=16438,8617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.838 00:14:07.838 Run status group 0 (all jobs): 00:14:07.838 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2004-2004msec 00:14:07.838 WRITE: bw=75.7MiB/s (79.4MB/s), 75.7MiB/s-75.7MiB/s (79.4MB/s-79.4MB/s), io=135MiB (141MB), run=1779-1779msec 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.838 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.838 rmmod nvme_tcp 00:14:07.838 rmmod nvme_fabrics 00:14:07.838 rmmod nvme_keyring 00:14:08.097 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.097 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74338 ']' 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74338 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74338 ']' 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74338 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 74338 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74338' 00:14:08.098 killing process with pid 74338 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74338 00:14:08.098 21:35:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74338 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:08.357 ************************************ 00:14:08.357 END TEST nvmf_fio_host 00:14:08.357 ************************************ 00:14:08.357 00:14:08.357 real 0m8.274s 00:14:08.357 user 0m33.628s 00:14:08.357 sys 0m2.221s 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:08.357 ************************************ 00:14:08.357 START TEST nvmf_failover 00:14:08.357 ************************************ 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:08.357 * Looking for test storage... 
00:14:08.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.357 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:08.358 21:35:53 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.358 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:08.619 Cannot find device "nvmf_tgt_br" 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.619 Cannot find device "nvmf_tgt_br2" 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link 
set nvmf_init_br down 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:08.619 Cannot find device "nvmf_tgt_br" 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:08.619 Cannot find device "nvmf_tgt_br2" 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.619 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:08.880 00:14:08.880 --- 10.0.0.2 ping statistics --- 00:14:08.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.880 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:08.880 00:14:08.880 --- 10.0.0.3 ping statistics --- 00:14:08.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.880 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:08.880 00:14:08.880 --- 10.0.0.1 ping statistics --- 00:14:08.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.880 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=74684 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 74684 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74684 ']' 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.880 21:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:08.880 [2024-07-24 21:35:53.771425] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:14:08.880 [2024-07-24 21:35:53.771489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.137 [2024-07-24 21:35:53.909012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.137 [2024-07-24 21:35:54.016466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
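As in the fio host test earlier, nvmfappstart launches the target inside the test namespace and waits for its RPC socket; here the core mask is 0xE (cores 1-3, matching the "Total cores available: 3" just above) instead of 0xF. A rough stand-in for that step follows; the polling loop only approximates waitforlisten and is an assumption, not the script's exact implementation:

  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xE: cores 1-3
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # poll the RPC socket until the target answers (approximation of waitforlisten)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done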
00:14:09.137 [2024-07-24 21:35:54.016776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.137 [2024-07-24 21:35:54.016937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.137 [2024-07-24 21:35:54.017080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.137 [2024-07-24 21:35:54.017130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.137 [2024-07-24 21:35:54.017399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.138 [2024-07-24 21:35:54.017493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.138 [2024-07-24 21:35:54.017498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.138 [2024-07-24 21:35:54.077328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.702 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.702 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:09.702 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.702 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.702 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:09.960 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.960 21:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.217 [2024-07-24 21:35:55.004701] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.217 21:35:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:10.475 Malloc0 00:14:10.475 21:35:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.734 21:35:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.992 21:35:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.251 [2024-07-24 21:35:56.041387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.251 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:11.509 [2024-07-24 21:35:56.269787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:11.509 [2024-07-24 21:35:56.478040] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4422 *** 00:14:11.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74742 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74742 /var/tmp/bdevperf.sock 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74742 ']' 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.509 21:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:12.885 21:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.885 21:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:12.885 21:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:12.885 NVMe0n1 00:14:12.885 21:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:13.143 00:14:13.143 21:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74770 00:14:13.143 21:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.143 21:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:14.518 21:35:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.518 21:35:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:17.801 21:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:17.801 00:14:17.801 21:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:18.059 21:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 
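The failover exercise traced here and continued just below is compact enough to restate as a sketch: bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f) is given two paths to cnode1, the 15-second verify workload is kicked off over RPC, and the subsystem's listeners are then shuffled underneath it. Commands and ordering are copied from the trace; backgrounding with & and a plain wait stand in for the script's run_test_pid handling:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"

  # two paths to the same subsystem: ports 4420 and 4421
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the queued verify workload, then pull listeners while it runs
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait    # wait for the perform_tests run to finish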
00:14:21.344 21:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.344 [2024-07-24 21:36:06.177856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.344 21:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:22.280 21:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:22.539 21:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74770 00:14:29.148 0 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74742 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74742 ']' 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74742 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74742 00:14:29.148 killing process with pid 74742 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74742' 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74742 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74742 00:14:29.148 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:29.148 [2024-07-24 21:35:56.548438] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:14:29.148 [2024-07-24 21:35:56.548546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74742 ] 00:14:29.148 [2024-07-24 21:35:56.688435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.148 [2024-07-24 21:35:56.789312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.148 [2024-07-24 21:35:56.846103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.148 Running I/O for 15 seconds... 
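The try.txt excerpt that follows is dominated by "ABORTED - SQ DELETION (00/08)" completions logged by bdevperf. That is the expected signature of this test: each time a listener is removed, the target tears down the corresponding qpairs, the commands still in flight on that path complete with SQ-deletion abort status, and the bdev_nvme layer retries them on a surviving path (this reading is an interpretation of the log, not something it states). A quick way to gauge how many I/Os were bounced this way:

  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt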
00:14:29.148 [2024-07-24 21:35:59.346372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.148 [2024-07-24 21:35:59.346679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.148 [2024-07-24 21:35:59.346694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.346707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.346980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.346996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.149 [2024-07-24 21:35:59.347566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.149 [2024-07-24 21:35:59.347979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.149 [2024-07-24 21:35:59.347997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:29.150 [2024-07-24 21:35:59.348154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348770] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.150 [2024-07-24 21:35:59.348797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.348977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.348992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.150 [2024-07-24 21:35:59.349290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.150 [2024-07-24 21:35:59.349305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:29.151 [2024-07-24 21:35:59.349503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.151 [2024-07-24 21:35:59.349631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.151 [2024-07-24 21:35:59.349847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0830 is same with the state(5) to be set 00:14:29.151 [2024-07-24 21:35:59.349877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.349886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.349896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84472 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.349909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.349932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.349941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84800 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.349953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.349965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.349974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.349983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84808 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.349995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84816 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84824 len:8 
PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84832 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84840 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84848 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84856 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84864 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84872 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 
21:35:59.350406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84880 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.151 [2024-07-24 21:35:59.350507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.151 [2024-07-24 21:35:59.350521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84888 len:8 PRP1 0x0 PRP2 0x0 00:14:29.151 [2024-07-24 21:35:59.350538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.151 [2024-07-24 21:35:59.350551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84896 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84904 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84912 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84920 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84928 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84936 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84944 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.350946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.350962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84952 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.350975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.350990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.351000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.351011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84960 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.351024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.351049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.351059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84968 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.351073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.351097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.351107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84976 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.351141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.152 [2024-07-24 21:35:59.351164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.152 [2024-07-24 21:35:59.351174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84984 len:8 PRP1 0x0 PRP2 0x0 00:14:29.152 [2024-07-24 21:35:59.351186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351258] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b0830 was disconnected and freed. reset controller. 00:14:29.152 [2024-07-24 21:35:59.351275] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:29.152 [2024-07-24 21:35:59.351330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.152 [2024-07-24 21:35:59.351349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.152 [2024-07-24 21:35:59.351391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.152 [2024-07-24 21:35:59.351431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.152 [2024-07-24 21:35:59.351461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:35:59.351483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:29.152 [2024-07-24 21:35:59.351541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2041570 (9): Bad file descriptor 00:14:29.152 [2024-07-24 21:35:59.355327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:29.152 [2024-07-24 21:35:59.388464] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:29.152 [2024-07-24 21:36:02.912848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.912906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.912933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.912963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.912976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.912990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.913004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.913046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.913072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.913116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.152 [2024-07-24 21:36:02.913144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.152 [2024-07-24 21:36:02.913173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.152 [2024-07-24 21:36:02.913200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 
21:36:02.913214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.152 [2024-07-24 21:36:02.913229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.152 [2024-07-24 21:36:02.913282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.152 [2024-07-24 21:36:02.913296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.152 [2024-07-24 21:36:02.913309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913809] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.913955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.913968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.913988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.153 [2024-07-24 21:36:02.914173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.914199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.153 [2024-07-24 21:36:02.914212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.153 [2024-07-24 21:36:02.914225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 
21:36:02.914640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.914869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.914983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.914997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.915011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.915038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.915074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.154 [2024-07-24 21:36:02.915103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.154 [2024-07-24 21:36:02.915389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.154 [2024-07-24 21:36:02.915403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.915586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.915973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.915987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 
21:36:02.916169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.155 [2024-07-24 21:36:02.916307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.155 [2024-07-24 21:36:02.916552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.155 [2024-07-24 21:36:02.916565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:02.916587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.916602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:02.916615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.916629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:02.916671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.916731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.156 [2024-07-24 21:36:02.916747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.156 [2024-07-24 21:36:02.916758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119680 len:8 PRP1 0x0 PRP2 0x0 00:14:29.156 [2024-07-24 21:36:02.916776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.916834] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b0d50 was disconnected and freed. reset controller. 
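For anyone scanning the dump above: every completion in this stretch carries the same status pair. A minimal sketch, assuming only the NVMe-spec status encoding (the parenthesised pair is Status Code Type / Status Code in hex), of how the "(00/08)" printed by spdk_nvme_print_completion decodes: type 0x0 is the generic command status set and code 0x08 is "Command Aborted due to SQ Deletion", i.e. these READs and WRITEs were still queued on the I/O submission queue that the reset path deleted. The trailing sqhd, p, m and dnr fields are the submission-queue head, phase tag, "more" bit and "do not retry" bit from the completion entry.

# decode_nvme_status.py - illustrative only, not part of the SPDK tree
GENERIC_STATUS = {                    # Status Code Type 0x0 (generic command status)
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",    # command's submission queue was deleted
}

def decode_status(pair: str) -> str:
    """Decode the '(SCT/SC)' hex pair, e.g. '00/08', into readable text."""
    sct, sc = (int(x, 16) for x in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status("00/08"))         # -> ABORTED - SQ DELETION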
00:14:29.156 [2024-07-24 21:36:02.916851] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:14:29.156 [2024-07-24 21:36:02.916917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-07-24 21:36:02.916936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.916950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-07-24 21:36:02.916963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.916977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-07-24 21:36:02.916989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.917003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-07-24 21:36:02.917016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:02.917028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:29.156 [2024-07-24 21:36:02.920404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:29.156 [2024-07-24 21:36:02.920440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2041570 (9): Bad file descriptor 00:14:29.156 [2024-07-24 21:36:02.953024] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
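The block above is the only point in this stretch where something other than an abort notice happens: bdev_nvme frees the broken qpair, starts a failover of the trid from 10.0.0.2:4421 to 10.0.0.2:4422, aborts the four outstanding admin ASYNC EVENT REQUESTs, marks nqn.2016-06.io.spdk:cnode1 as failed, disconnects (hence the "Bad file descriptor" flush error on the old socket) and reconnects until "Resetting controller successful." The burst starting at 21:36:07 below is the same abort-and-reset cycle repeating on the new path. A hypothetical filter script (the name and patterns are mine, matched only against message formats that appear in this log) for pulling those milestones out of the full console dump:

# failover_events.py - illustrative helper, not part of the SPDK tree
import re
import sys

EVENTS = [
    re.compile(r"Start failover from \S+ to \S+"),
    re.compile(r"qpair 0x[0-9a-f]+ was disconnected and freed"),
    re.compile(r"\[nqn\.[^\]]+\] in failed state"),
    re.compile(r"\[nqn\.[^\]]+\] resetting controller"),
    re.compile(r"Resetting controller successful"),
]
STAMP = re.compile(r"\[(\d{4}-\d{2}-\d{2} [\d:.]+)\]")

def failover_events(lines):
    """Yield (timestamp, line) for lines containing a reset/failover milestone."""
    for line in lines:
        if any(p.search(line) for p in EVENTS):
            ts = STAMP.search(line)
            yield (ts.group(1) if ts else "?", line.rstrip())

if __name__ == "__main__":
    for ts, line in failover_events(sys.stdin):
        print(ts, "|", line[:160])    # truncate; captured lines here hold many messages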
00:14:29.156 [2024-07-24 21:36:07.447827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.447906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.447951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.447967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.448035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.448063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.448120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.448150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.448181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.156 [2024-07-24 21:36:07.448211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.448978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.448995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.156 [2024-07-24 21:36:07.449012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.156 [2024-07-24 21:36:07.449027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:29.157 [2024-07-24 21:36:07.449419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.449687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449789] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.449958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.449972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.450032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.450061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.157 [2024-07-24 21:36:07.450106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.450137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.450167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.157 [2024-07-24 21:36:07.450199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.157 [2024-07-24 21:36:07.450214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.450625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:29.158 [2024-07-24 21:36:07.450861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.450984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.450998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451515] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.158 [2024-07-24 21:36:07.451529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-07-24 21:36:07.451544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.158 [2024-07-24 21:36:07.451558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.451955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.451971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.452032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.452064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.452110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.452140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.159 [2024-07-24 21:36:07.452171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.159 [2024-07-24 21:36:07.452202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:29.159 [2024-07-24 21:36:07.452232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.159 [2024-07-24 21:36:07.452263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.159 [2024-07-24 21:36:07.452294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.159 [2024-07-24 21:36:07.452324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.159 [2024-07-24 21:36:07.452355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.159 [2024-07-24 21:36:07.452386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:29.159 [2024-07-24 21:36:07.452468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:29.159 [2024-07-24 21:36:07.452479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:14:29.159 [2024-07-24 21:36:07.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452578] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b0a10 was disconnected and freed. reset controller. 
00:14:29.159 [2024-07-24 21:36:07.452596] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:29.159 [2024-07-24 21:36:07.452666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.159 [2024-07-24 21:36:07.452686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.159 [2024-07-24 21:36:07.452732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.159 [2024-07-24 21:36:07.452761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.159 [2024-07-24 21:36:07.452790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.159 [2024-07-24 21:36:07.452804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:29.159 [2024-07-24 21:36:07.452856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2041570 (9): Bad file descriptor 00:14:29.159 [2024-07-24 21:36:07.456888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:29.159 [2024-07-24 21:36:07.492603] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:29.159 00:14:29.159 Latency(us) 00:14:29.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.159 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.159 Verification LBA range: start 0x0 length 0x4000 00:14:29.159 NVMe0n1 : 15.01 9420.74 36.80 214.64 0.00 13253.75 584.61 15966.95 00:14:29.159 =================================================================================================================== 00:14:29.159 Total : 9420.74 36.80 214.64 0.00 13253.75 584.61 15966.95 00:14:29.159 Received shutdown signal, test time was about 15.000000 seconds 00:14:29.159 00:14:29.159 Latency(us) 00:14:29.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.159 =================================================================================================================== 00:14:29.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:29.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
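For orientation, the failover.sh steps traced around this point reduce to a short recipe: count the successful resets reported by the first bdevperf run, then start a second bdevperf in RPC-wait mode and register the extra listeners it will fail over between. A rough sketch using only commands visible in this trace (the grep target is assumed to be the captured try.txt output, and the count=$(...) wrapper plus the backgrounding are a paraphrase of the script, not its literal text):

# The first run is expected to have logged three successful controller resets.
count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
(( count == 3 )) || exit 1

# Start a second bdevperf that waits on its own RPC socket for perform_tests.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

# Register the additional failover listeners on the target, then attach through the primary port.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1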
00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74945 00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74945 /var/tmp/bdevperf.sock 00:14:29.159 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74945 ']' 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:29.160 21:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:29.160 [2024-07-24 21:36:14.092617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:29.160 21:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:29.418 [2024-07-24 21:36:14.369025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:29.418 21:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:29.983 NVMe0n1 00:14:29.983 21:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:29.983 00:14:30.241 21:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:30.499 00:14:30.499 21:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:30.499 21:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:30.757 21:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:31.015 21:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:34.294 21:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:34.294 21:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:34.294 21:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75013 00:14:34.294 21:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.294 21:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75013 00:14:35.227 0 00:14:35.227 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:35.228 [2024-07-24 21:36:13.518045] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:14:35.228 [2024-07-24 21:36:13.518142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74945 ] 00:14:35.228 [2024-07-24 21:36:13.647199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.228 [2024-07-24 21:36:13.745048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.228 [2024-07-24 21:36:13.800809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:35.228 [2024-07-24 21:36:15.743142] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:35.228 [2024-07-24 21:36:15.743308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.228 [2024-07-24 21:36:15.743335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.228 [2024-07-24 21:36:15.743353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.228 [2024-07-24 21:36:15.743366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.228 [2024-07-24 21:36:15.743380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.228 [2024-07-24 21:36:15.743393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.228 [2024-07-24 21:36:15.743406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.228 [2024-07-24 21:36:15.743418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.228 [2024-07-24 21:36:15.743431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:35.228 [2024-07-24 21:36:15.743477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:35.228 [2024-07-24 21:36:15.743506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1567570 (9): Bad file descriptor 00:14:35.228 [2024-07-24 21:36:15.747653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:35.228 Running I/O for 1 seconds... 00:14:35.228 00:14:35.228 Latency(us) 00:14:35.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.228 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:35.228 Verification LBA range: start 0x0 length 0x4000 00:14:35.228 NVMe0n1 : 1.00 6772.25 26.45 0.00 0.00 18823.08 2263.97 15966.95 00:14:35.228 =================================================================================================================== 00:14:35.228 Total : 6772.25 26.45 0.00 0.00 18823.08 2263.97 15966.95 00:14:35.228 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:35.228 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:35.486 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:35.744 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:35.744 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:36.002 21:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:36.259 21:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74945 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74945 ']' 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74945 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74945 00:14:39.540 killing process with pid 74945 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74945' 00:14:39.540 21:36:24 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74945 00:14:39.540 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74945 00:14:39.797 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:39.797 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.054 21:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.054 rmmod nvme_tcp 00:14:40.054 rmmod nvme_fabrics 00:14:40.054 rmmod nvme_keyring 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 74684 ']' 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 74684 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74684 ']' 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74684 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.054 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74684 00:14:40.312 killing process with pid 74684 00:14:40.312 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:40.312 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:40.312 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74684' 00:14:40.312 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74684 00:14:40.312 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74684 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.570 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:40.570 00:14:40.571 real 0m32.122s 00:14:40.571 user 2m3.896s 00:14:40.571 sys 0m5.632s 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:40.571 ************************************ 00:14:40.571 END TEST nvmf_failover 00:14:40.571 ************************************ 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:40.571 ************************************ 00:14:40.571 START TEST nvmf_host_discovery 00:14:40.571 ************************************ 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:40.571 * Looking for test storage... 
00:14:40.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:40.571 21:36:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:14:40.571 Cannot find device "nvmf_tgt_br" 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.571 Cannot find device "nvmf_tgt_br2" 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:40.571 Cannot find device "nvmf_tgt_br" 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:14:40.571 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:40.829 Cannot find device "nvmf_tgt_br2" 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:40.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:14:40.829 00:14:40.829 --- 10.0.0.2 ping statistics --- 00:14:40.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.829 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:40.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:40.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:40.829 00:14:40.829 --- 10.0.0.3 ping statistics --- 00:14:40.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.829 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:40.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:40.829 00:14:40.829 --- 10.0.0.1 ping statistics --- 00:14:40.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.829 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.829 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.830 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.830 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.830 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.830 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75278 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75278 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75278 ']' 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.087 21:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:41.087 [2024-07-24 21:36:25.911438] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
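[editor's note] The ip/iptables/ping trace above is nvmf/common.sh building the loopback test fabric for the TCP transport. A minimal stand-alone sketch of that topology follows; every interface, namespace, address and rule name is taken directly from the trace (run as root, assumes iproute2 and iptables):

  # Target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one for the initiator/host side, two for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace and address everything
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring links up on both sides of the namespace boundary
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and allow NVMe/TCP traffic through
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity checks, exactly as in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The matching tear-down (the nomaster/down/delete calls at the top of this trace) runs before setup and is deliberately tolerant of missing devices, which is where the "Cannot find device" messages and the "# true" fallbacks come from.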
00:14:41.087 [2024-07-24 21:36:25.911551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.087 [2024-07-24 21:36:26.052603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.345 [2024-07-24 21:36:26.167974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.345 [2024-07-24 21:36:26.168029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.345 [2024-07-24 21:36:26.168041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.345 [2024-07-24 21:36:26.168049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.345 [2024-07-24 21:36:26.168057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.345 [2024-07-24 21:36:26.168093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.345 [2024-07-24 21:36:26.226458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.910 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.910 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:41.910 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.910 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.910 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 [2024-07-24 21:36:26.930133] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 [2024-07-24 21:36:26.938257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.168 21:36:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 null0 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 null1 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75310 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75310 /tmp/host.sock 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75310 ']' 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.168 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.168 21:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 [2024-07-24 21:36:27.028364] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
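[editor's note] Both SPDK apps in this test are gated on the same pattern: the target (pid 75278, default RPC socket /var/tmp/spdk.sock, started inside nvmf_tgt_ns_spdk with -m 0x2) and the host-side app (pid 75310, -m 0x1 -r /tmp/host.sock) are launched in the background, then waitforlisten polls until the RPC socket answers. The real helper lives in autotest_common.sh and its internals are not shown in this trace; the loop below is only an illustrative reconstruction under that assumption, with a hypothetical name so it is not mistaken for the real one:

  # Hypothetical stand-in for autotest_common.sh's waitforlisten (details assumed).
  wait_for_rpc_socket() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}   # default app socket, as in the log
      local max_retries=100

      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1        # give up if the app died
          # rpc_get_methods is one cheap probe that succeeds once the RPC server listens
          if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      return 1
  }

  # e.g. for the host-side app traced above:
  #   build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  #   wait_for_rpc_socket $! /tmp/host.sock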
00:14:42.168 [2024-07-24 21:36:27.028466] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75310 ] 00:14:42.168 [2024-07-24 21:36:27.163650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.426 [2024-07-24 21:36:27.294271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.426 [2024-07-24 21:36:27.348250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:42.992 21:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:43.250 21:36:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:43.250 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 [2024-07-24 21:36:28.302824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:43.768 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.768 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:14:43.768 21:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:14:44.026 [2024-07-24 21:36:28.953638] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:44.026 [2024-07-24 21:36:28.953689] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:44.026 [2024-07-24 21:36:28.953709] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:44.026 
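[editor's note] Stripped of the waitforcondition checks and JSON plumbing, the rpc_cmd sequence traced so far in discovery.sh condenses to the following (rpc_cmd is the test wrapper around scripts/rpc.py; calls with -s /tmp/host.sock go to the second, host-side app, the rest to the target on /var/tmp/spdk.sock):

  # Target side: TCP transport plus a discovery listener on port 8009
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
          -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # Host side: start the discovery service against that listener
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
          -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Nothing is discovered until the target exposes a subsystem the host may access
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

Only after nvmf_subsystem_add_host does the discovery log page list cnode0 for this host NQN, which is what triggers the discovery_attach_cb / discovery_log_page_cb lines that follow.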
[2024-07-24 21:36:28.959720] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:44.026 [2024-07-24 21:36:29.017109] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:44.026 [2024-07-24 21:36:29.017152] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:44.591 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:44.592 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
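[editor's note] Most of the trace noise here is the waitforcondition/getter pattern: re-evaluate a shell condition up to ten times, one second apart, where the condition shells out to the host app's RPC socket. Reconstructed from the traced lines (local max=10, (( max-- )), eval, sleep 1, return 0); the canonical versions live in autotest_common.sh and host/discovery.sh and may differ in how they fail:

  # Poll a condition (given as a string) until it holds or ~10 s elapse.
  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1    # the real helper fails the test here
  }

  # Getters used inside the conditions, exactly as piped in the xtrace:
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # e.g. block until discovery has attached the controller and created its bdev:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'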
00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.850 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 [2024-07-24 21:36:29.900196] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:45.109 [2024-07-24 21:36:29.900472] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:45.109 [2024-07-24 21:36:29.900509] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:45.109 [2024-07-24 21:36:29.906457] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:45.109 [2024-07-24 21:36:29.966873] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:45.109 [2024-07-24 21:36:29.966903] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:45.109 [2024-07-24 21:36:29.966912] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:45.109 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:14:45.110 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.110 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.110 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.110 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.110 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.110 21:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
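[editor's note] Two more helpers carry the remaining assertions. get_subsystem_paths lists the trsvcid of every path attached to one controller, so the expected "4420 4421" below confirms that the 4421 listener was added to nvme0 as a new path ("new path for nvme0" in the bdev_nvme log) rather than as a second controller. get_notification_count counts bdev notifications newer than the last seen notify_id and advances it (0 -> 1 -> 2 in this trace). Both are reconstructed from the traced pipelines:

  # Ports (trsvcid) of all paths currently attached to a given controller
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # Notifications newer than $notify_id; the caller advances notify_id afterwards
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" |
          jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # e.g. after the 4421 listener has been picked up by the discovery service:
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'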
00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.110 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.369 [2024-07-24 21:36:30.136769] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:45.369 [2024-07-24 21:36:30.136825] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:45.369 [2024-07-24 21:36:30.142713] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:14:45.369 [2024-07-24 21:36:30.142754] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:45.369 [2024-07-24 21:36:30.142914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.369 [2024-07-24 21:36:30.142950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.369 [2024-07-24 21:36:30.142964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.369 [2024-07-24 21:36:30.142974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.369 [2024-07-24 21:36:30.142984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.369 [2024-07-24 21:36:30.143003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.369 [2024-07-24 21:36:30.143013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.369 [2024-07-24 21:36:30.143022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.369 [2024-07-24 21:36:30.143031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff620 is same with the state(5) to be set 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.369 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:45.627 21:36:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.627 21:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.000 [2024-07-24 21:36:31.564463] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:47.001 [2024-07-24 21:36:31.564520] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:47.001 [2024-07-24 21:36:31.564541] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:47.001 [2024-07-24 21:36:31.570513] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:14:47.001 [2024-07-24 21:36:31.631376] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:47.001 [2024-07-24 21:36:31.631432] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 request: 00:14:47.001 { 00:14:47.001 "name": "nvme", 00:14:47.001 "trtype": "tcp", 00:14:47.001 "traddr": "10.0.0.2", 00:14:47.001 "adrfam": "ipv4", 00:14:47.001 "trsvcid": "8009", 00:14:47.001 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:47.001 "wait_for_attach": true, 00:14:47.001 "method": "bdev_nvme_start_discovery", 00:14:47.001 "req_id": 1 00:14:47.001 } 00:14:47.001 Got JSON-RPC error response 00:14:47.001 response: 00:14:47.001 { 00:14:47.001 "code": -17, 00:14:47.001 "message": "File exists" 00:14:47.001 } 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 request: 00:14:47.001 { 00:14:47.001 "name": "nvme_second", 00:14:47.001 "trtype": "tcp", 00:14:47.001 "traddr": "10.0.0.2", 00:14:47.001 "adrfam": "ipv4", 00:14:47.001 "trsvcid": "8009", 00:14:47.001 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:47.001 "wait_for_attach": true, 00:14:47.001 "method": "bdev_nvme_start_discovery", 00:14:47.001 "req_id": 1 00:14:47.001 } 00:14:47.001 Got JSON-RPC error response 00:14:47.001 response: 00:14:47.001 { 00:14:47.001 "code": -17, 00:14:47.001 "message": "File exists" 00:14:47.001 } 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
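The two rejected calls above exercise SPDK's duplicate-discovery guard: once a discovery service is being monitored for a given transport address, a second bdev_nvme_start_discovery against the same trid fails with JSON-RPC error -17 ("File exists"), regardless of the -b name passed. A minimal standalone sketch of the same check, assuming a target is already serving discovery on 10.0.0.2:8009 and the host app's RPC socket is /tmp/host.sock (both taken from the trace; the rpc shorthand variable is an addition here, not part of the harness):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  # start discovery once; -w blocks until the discovery controller is attached
  $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # a second start against the same discovery service must be rejected with -17 (File exists)
  if ! $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery start rejected as expected"
  fi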
00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.001 21:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.935 [2024-07-24 21:36:32.911981] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:47.935 [2024-07-24 21:36:32.912050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bc30 with addr=10.0.0.2, port=8010 00:14:47.935 [2024-07-24 21:36:32.912079] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:47.935 [2024-07-24 21:36:32.912090] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:47.935 [2024-07-24 21:36:32.912100] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:49.308 [2024-07-24 21:36:33.911991] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:49.308 [2024-07-24 21:36:33.912067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bc30 with addr=10.0.0.2, port=8010 00:14:49.308 [2024-07-24 21:36:33.912095] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:49.308 [2024-07-24 21:36:33.912105] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:49.308 [2024-07-24 21:36:33.912114] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:50.245 [2024-07-24 21:36:34.911818] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:14:50.245 request: 00:14:50.245 { 00:14:50.245 "name": "nvme_second", 00:14:50.245 "trtype": "tcp", 00:14:50.245 "traddr": "10.0.0.2", 00:14:50.245 "adrfam": "ipv4", 00:14:50.245 "trsvcid": "8010", 00:14:50.245 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:50.245 "wait_for_attach": false, 00:14:50.245 "attach_timeout_ms": 3000, 00:14:50.245 "method": "bdev_nvme_start_discovery", 00:14:50.245 "req_id": 1 00:14:50.245 } 00:14:50.245 Got JSON-RPC error response 00:14:50.245 response: 00:14:50.245 { 00:14:50.245 "code": -110, 00:14:50.245 "message": "Connection timed out" 00:14:50.245 } 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75310 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.245 21:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:14:50.245 rmmod nvme_tcp 00:14:50.245 rmmod nvme_fabrics 00:14:50.245 rmmod nvme_keyring 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75278 ']' 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75278 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75278 ']' 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75278 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75278 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:50.245 killing process with pid 75278 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75278' 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75278 00:14:50.245 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75278 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:50.519 00:14:50.519 real 0m10.055s 00:14:50.519 user 0m19.177s 00:14:50.519 sys 0m2.138s 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.519 ************************************ 00:14:50.519 END TEST nvmf_host_discovery 00:14:50.519 ************************************ 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
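The 8010 attempt above showed the other failure mode before the discovery test shut down: with -T the RPC gives the discovery controller at most attach_timeout_ms to attach, and because nothing listens on port 8010 the repeated connect() failures (errno 111) end in JSON-RPC error -110, "Connection timed out". A minimal sketch of that negative check, reusing the exact flags from the trace (the rpc shorthand variable is again an addition here):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  # no discovery service listens on 10.0.0.2:8010, so this should fail after roughly 3000 ms
  if ! $rpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
      echo "discovery attach timed out as expected"
  fi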
00:14:50.519 21:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.519 21:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:50.790 ************************************ 00:14:50.790 START TEST nvmf_host_multipath_status 00:14:50.790 ************************************ 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:50.790 * Looking for test storage... 00:14:50.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:50.790 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 
-- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:50.791 Cannot find device "nvmf_tgt_br" 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.791 Cannot find device "nvmf_tgt_br2" 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:50.791 Cannot find device "nvmf_tgt_br" 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:50.791 Cannot find device "nvmf_tgt_br2" 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.791 
21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:50.791 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:51.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:14:51.050 00:14:51.050 --- 10.0.0.2 ping statistics --- 00:14:51.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.050 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:51.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:51.050 00:14:51.050 --- 10.0.0.3 ping statistics --- 00:14:51.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.050 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:51.050 00:14:51.050 --- 10.0.0.1 ping statistics --- 00:14:51.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.050 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.050 21:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=75763 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 75763 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75763 ']' 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.050 
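The nvmf_veth_init trace above is what gives the multipath test its network: the target runs inside the nvmf_tgt_ns_spdk namespace, veth pairs connect it and the initiator to a common bridge, 10.0.0.1 is the initiator address, and 10.0.0.2/10.0.0.3 become the target ports. A condensed sketch of that topology, built only from commands visible in the log (the second target interface and the 10.0.0.3 address are omitted for brevity):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: initiator <-> bridge, target <-> bridge; move the target end into the namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator gets 10.0.0.1, the first target port gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring the links up and bridge the host-side ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP traffic and confirm reachability, as the ping output above does
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2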
21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.050 21:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:51.309 [2024-07-24 21:36:36.060827] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:14:51.309 [2024-07-24 21:36:36.060929] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.309 [2024-07-24 21:36:36.204349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:51.567 [2024-07-24 21:36:36.350252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.567 [2024-07-24 21:36:36.350330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.567 [2024-07-24 21:36:36.350348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.567 [2024-07-24 21:36:36.350360] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.567 [2024-07-24 21:36:36.350379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.567 [2024-07-24 21:36:36.350549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.567 [2024-07-24 21:36:36.350564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.567 [2024-07-24 21:36:36.428889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75763 00:14:52.134 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.393 [2024-07-24 21:36:37.371752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.652 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:52.911 Malloc0 00:14:52.911 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:53.185 21:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.447 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.447 [2024-07-24 21:36:38.435949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:53.705 [2024-07-24 21:36:38.660060] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75813 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75813 /var/tmp/bdevperf.sock 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75813 ']' 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
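At this point the target side of the multipath test is in place: a TCP transport, one Malloc bdev exported through subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, and the same subsystem listening on both 4420 and 4421 so the host sees two paths to a single namespace. A condensed sketch of that target setup, using the rpc.py calls exactly as they appear in the trace above (the rpc shorthand variable is an addition; remaining flags are copied verbatim from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport (options as in the trace) and a 64 MiB malloc bdev with 512-byte blocks
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0

  # one subsystem with ANA reporting (-r), its namespace, and two listeners = two paths
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf host that was just launched then attaches the same controller through both ports, the second time with -x multipath, which is what the port_status checks that follow interrogate via bdev_nvme_get_io_paths and jq.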
00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.705 21:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:54.641 21:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.641 21:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:14:54.641 21:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:54.899 21:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:14:55.466 Nvme0n1 00:14:55.466 21:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:55.724 Nvme0n1 00:14:55.724 21:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:55.724 21:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:57.626 21:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:57.626 21:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:57.884 21:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:58.142 21:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:59.514 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.514 21:36:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:59.772 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:59.772 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:59.772 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.772 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:00.042 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.042 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:00.042 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.042 21:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:00.314 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.880 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.881 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:00.881 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:00.881 21:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:01.138 21:36:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.511 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:02.769 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.769 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:02.769 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:02.769 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.027 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.027 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:03.027 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.027 21:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:03.285 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.285 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:03.285 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.285 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:03.544 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.544 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:03.544 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.544 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:03.802 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.802 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:03.802 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:04.061 21:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:04.320 21:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:05.255 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:05.255 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:05.255 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.255 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:05.514 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.514 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:05.514 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.514 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:06.081 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:06.081 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:06.081 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.081 21:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:06.081 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.081 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:06.081 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.081 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:06.339 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.339 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:06.339 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.339 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:06.597 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.597 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:06.597 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.598 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:06.947 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.947 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:06.947 21:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:07.207 21:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:07.466 21:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:08.403 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:08.403 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:08.403 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.403 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:08.661 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:08.661 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:08.661 21:36:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.661 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:08.920 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:08.920 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:08.920 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.920 21:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:09.178 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.178 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:09.178 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.178 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:09.499 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.499 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:09.499 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.499 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:09.774 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.774 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:09.774 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.774 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:10.039 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:10.039 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:10.039 21:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:10.301 21:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:10.559 21:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:11.493 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:11.493 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:11.493 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:11.493 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:11.752 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:11.752 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:11.752 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:11.752 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:12.011 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:12.011 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:12.011 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.011 21:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:12.269 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.270 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:12.270 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.270 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:12.528 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.528 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:12.528 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.528 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:15:12.787 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:12.787 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:12.787 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.787 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:13.046 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:13.046 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:13.046 21:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:13.304 21:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:13.562 21:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:14.497 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:14.497 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:14.497 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.497 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:14.762 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:14.762 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:14.762 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.762 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:15.026 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.026 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:15.026 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.026 21:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
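All of the repeated current/connected/accessible checks in this trace come from the same port_status helper in multipath_status.sh: it asks bdevperf, over its private RPC socket, for the NVMe-oF I/O paths and filters the JSON by listener port with jq. A minimal sketch of that pattern, reconstructed from the commands above:

  # Sketch of the port_status check pattern seen throughout this trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  port_status() {
      local port=$1 attr=$2 expected=$3
      # Query bdevperf's view of the multipath I/O paths and pick the requested
      # attribute (.current / .connected / .accessible) of the path on this trsvcid.
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]
  }

  # check_status in the test simply chains these, e.g.:
  #   port_status 4420 current true     # 4420 is the path I/O is currently routed to
  #   port_status 4421 current false
  #   port_status 4420 connected true
  #   port_status 4421 accessible true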
00:15:15.284 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.284 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:15.284 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.284 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:15.543 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.543 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:15.543 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.543 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:15.859 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:15.859 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:15.859 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.859 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:16.118 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.118 21:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:16.377 21:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:16.377 21:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:16.635 21:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:16.930 21:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:17.867 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:17.867 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:17.867 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
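Each iteration above follows the same shape: the target flips the ANA state of its two listeners, the test sleeps a second so the host can pick up the ANA change, and check_status verifies which path bdevperf now reports as current, connected, and accessible. Partway through, the host-side policy is also switched to active_active, after which both optimized paths carry I/O. A sketch of that driver loop, reconstructed from the set_ANA_state and bdev_nvme_set_multipath_policy calls in this trace:

  # Sketch of the ANA-state driver used by this test (reconstructed, not verbatim).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Active/passive phase: only one path is "current" at a time.
  set_ANA_state optimized optimized;        sleep 1   # check_status expects 4420 current, 4421 not
  set_ANA_state non_optimized optimized;    sleep 1   # I/O moves to 4421
  set_ANA_state inaccessible  inaccessible; sleep 1   # no path is accessible

  # Switch the host to active/active and repeat the same transitions.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state optimized optimized; sleep 1          # now both paths report current=true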
00:15:17.867 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:18.125 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.125 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:18.125 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:18.125 21:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.383 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.384 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:18.384 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.384 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:18.641 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.641 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:18.641 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.641 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:18.899 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.899 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:18.899 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:18.899 21:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.156 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.156 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:19.156 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.156 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:19.413 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.413 
21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:19.413 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:19.672 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:19.930 21:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:20.863 21:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:20.863 21:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:20.863 21:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.863 21:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.428 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.993 21:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:22.251 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.251 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:22.252 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.252 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:22.819 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.819 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:22.819 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:22.819 21:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:23.078 21:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:24.455 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.714 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.714 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:15:24.714 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.714 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:24.972 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.972 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:24.972 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.972 21:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:25.231 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.231 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:25.231 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.231 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:25.490 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.490 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:25.490 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.490 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:25.748 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.748 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:25.748 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:26.007 21:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:26.265 21:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:27.200 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:27.200 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:27.200 21:37:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.200 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:27.459 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:27.459 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:27.459 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:27.459 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.718 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:27.718 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:27.718 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.718 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:27.976 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:27.976 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:27.976 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.976 21:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:28.235 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.235 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:28.235 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:28.235 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.493 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.493 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:28.493 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.493 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75813 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75813 ']' 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75813 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75813 00:15:28.750 killing process with pid 75813 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75813' 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75813 00:15:28.750 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75813 00:15:28.750 Connection closed with partial response: 00:15:28.750 00:15:28.750 00:15:29.012 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75813 00:15:29.012 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:29.012 [2024-07-24 21:36:38.725713] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:15:29.012 [2024-07-24 21:36:38.725881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75813 ] 00:15:29.012 [2024-07-24 21:36:38.863713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.012 [2024-07-24 21:36:38.996300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.012 [2024-07-24 21:36:39.051600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.012 Running I/O for 90 seconds... 
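The teardown above uses the shared killprocess helper from autotest_common.sh: it confirms the PID is still alive, looks up the process name, sends SIGTERM and waits for the process to exit; the try.txt dump that follows is bdevperf's own log, replayed into the job output by the test. A simplified sketch of that helper, reconstructed from the trace (the real helper also special-cases sudo-wrapped processes, which is omitted here):

  # Simplified sketch of the killprocess teardown pattern seen above.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                                    # make sure the process still exists
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"              # in this run: reactor_2, pid 75813
      kill "$pid"
      wait "$pid"                                       # reap it and propagate its exit code
  }

  # In this run: killprocess 75813 (the bdevperf started earlier), then
  # cat .../test/nvmf/host/try.txt to replay bdevperf's log into the job output.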
00:15:29.012 [2024-07-24 21:36:55.173214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:29.012 [2024-07-24 21:36:55.173707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.012 [2024-07-24 21:36:55.173722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0
00:15:29.012 [2024-07-24 21:36:55.173742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:29.012 [2024-07-24 21:36:55.173784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats for every outstanding I/O on qid:1 between 21:36:55.173 and 21:37:11.021 (WRITE lba 152-568 and 68536-69048; READ lba 0-56, 67952-68632 and 130624-131064; all len:8), and every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:15:29.015 [2024-07-24 21:37:11.021605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:29.015 [2024-07-24 21:37:11.021642] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:29.015 [2024-07-24 21:37:11.021668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.015 [2024-07-24 21:37:11.021685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:29.015 [2024-07-24 21:37:11.021707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.015 [2024-07-24 21:37:11.021734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:29.015 [2024-07-24 21:37:11.021758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.015 [2024-07-24 21:37:11.021776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:29.015 [2024-07-24 21:37:11.021798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.015 [2024-07-24 21:37:11.021816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:29.015 [2024-07-24 21:37:11.021838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.015 [2024-07-24 21:37:11.021855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:29.015 [2024-07-24 21:37:11.021878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.015 [2024-07-24 21:37:11.021895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:29.016 [2024-07-24 21:37:11.023413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 
nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.023935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.023974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.023996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.024090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.024130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.024219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.024374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.024396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.024413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.025431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.025477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
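The status printed as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" in these completions is Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), which the NVMe specification defines for I/O completed while the namespace's ANA group is inaccessible; the other fields are cid (command identifier), sqhd (submission queue head pointer), p (phase tag), m (more) and dnr (do not retry). A minimal standalone decoding sketch, assuming the SPDK-style packing of these fields into the upper 16 bits of completion dword 3 (illustrative only, not code from the SPDK tree or part of the captured test output):

    # decode_nvme_status.py - illustrative sketch of how the "(SCT/SC)" pair
    # and the p/m/dnr flags shown above unpack from the NVMe completion
    # status word (upper half of CQE dword 3), following the bitfield order
    # p:1, sc:8, sct:3, crd:2, m:1, dnr:1.

    PATH_RELATED_STATUS_CODES = {  # status codes defined under SCT 0x3
        0x00: "INTERNAL PATH ERROR",
        0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
        0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
        0x03: "ASYMMETRIC ACCESS TRANSITION",
    }

    def decode_status(status16: int) -> dict:
        """Split the packed 16-bit status word into its bitfields."""
        return {
            "p":   status16 & 0x1,          # phase tag
            "sc":  (status16 >> 1) & 0xFF,  # status code
            "sct": (status16 >> 9) & 0x7,   # status code type
            "crd": (status16 >> 12) & 0x3,  # command retry delay
            "m":   (status16 >> 14) & 0x1,  # more
            "dnr": (status16 >> 15) & 0x1,  # do not retry
        }

    if __name__ == "__main__":
        # SCT=0x3, SC=0x02 -> the "(03/02)" seen throughout this log.
        packed = (0x3 << 9) | (0x02 << 1)
        fields = decode_status(packed)
        name = PATH_RELATED_STATUS_CODES.get(fields["sc"], "UNKNOWN")
        print(f"{name} ({fields['sct']:02x}/{fields['sc']:02x}) "
              f"p:{fields['p']} m:{fields['m']} dnr:{fields['dnr']}")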
00:15:29.016 [2024-07-24 21:37:11.025690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.025785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.025824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.025864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.025967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.025990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.026047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.026176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.026274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.026392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.026415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.026432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.027548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.027595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.027665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.027706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.027745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.027796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.027845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.027884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.027923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.027962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.027984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.016 [2024-07-24 21:37:11.028001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.028023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:29.016 [2024-07-24 21:37:11.028040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.028062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.016 [2024-07-24 21:37:11.028079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:29.016 [2024-07-24 21:37:11.028101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:36 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.028603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.028638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.028658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.030274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.030322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.030361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030438] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.030608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.030705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.030743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.030766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.030783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 
m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.032569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.032676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.032716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.032833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.032910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.032949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.032971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.032987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.033035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.033271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.033310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.033349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.033464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.033502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.033533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.033551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.034640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.034669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.034697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.017 [2024-07-24 21:37:11.034716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.034740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:29.017 [2024-07-24 21:37:11.034757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:29.017 [2024-07-24 21:37:11.034779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.017 [2024-07-24 21:37:11.034796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.034819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.034835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.034857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.034873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.034895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.034912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.034946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.034965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.034987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.035849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.035948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.035964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:15:29.018 [2024-07-24 21:37:11.035987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.036003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.037949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.037979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.038046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.038101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.038142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.038181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.038220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.018 [2024-07-24 21:37:11.038259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.038297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:29.018 [2024-07-24 21:37:11.038320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.018 [2024-07-24 21:37:11.038337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:29.018 Received shutdown signal, test time was about 32.968240 seconds 00:15:29.018 00:15:29.018 Latency(us) 00:15:29.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.018 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:29.018 Verification LBA range: start 0x0 length 0x4000 00:15:29.018 Nvme0n1 : 32.97 8395.05 32.79 0.00 0.00 15216.69 215.97 4026531.84 00:15:29.018 =================================================================================================================== 00:15:29.018 Total : 8395.05 32.79 0.00 0.00 15216.69 215.97 4026531.84 00:15:29.018 21:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.276 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.276 rmmod nvme_tcp 00:15:29.277 rmmod nvme_fabrics 00:15:29.277 rmmod nvme_keyring 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 75763 ']' 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 75763 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75763 ']' 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75763 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75763 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.277 21:37:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75763' 00:15:29.277 killing process with pid 75763 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75763 00:15:29.277 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75763 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:29.844 ************************************ 00:15:29.844 END TEST nvmf_host_multipath_status 00:15:29.844 ************************************ 00:15:29.844 00:15:29.844 real 0m39.079s 00:15:29.844 user 2m4.983s 00:15:29.844 sys 0m12.177s 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.844 ************************************ 00:15:29.844 START TEST nvmf_discovery_remove_ifc 00:15:29.844 ************************************ 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:29.844 * Looking for test storage... 
00:15:29.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:29.844 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:29.845 Cannot find device "nvmf_tgt_br" 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.845 Cannot find device "nvmf_tgt_br2" 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:29.845 Cannot find device "nvmf_tgt_br" 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:29.845 Cannot find device "nvmf_tgt_br2" 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:15:29.845 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.104 21:37:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.104 21:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:30.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:30.104 00:15:30.104 --- 10.0.0.2 ping statistics --- 00:15:30.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.104 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:30.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:30.104 00:15:30.104 --- 10.0.0.3 ping statistics --- 00:15:30.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.104 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:30.104 00:15:30.104 --- 10.0.0.1 ping statistics --- 00:15:30.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.104 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.104 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76597 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76597 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76597 ']' 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.363 21:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:30.363 [2024-07-24 21:37:15.185181] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
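The nvmftestinit/nvmf_veth_init sequence above (NET_TYPE=virt) gives the rest of this test its topology: nvmf_init_if with 10.0.0.1 stays on the host as the initiator side, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are veth peers moved into the nvmf_tgt_ns_spdk namespace, everything is tied together through the nvmf_br bridge, an iptables rule admits NVMe/TCP traffic on port 4420, and single pings confirm all three addresses answer before nvmf_tgt is launched inside the namespace. A condensed sketch of those steps, with names taken from the log and only one target interface shown for brevity (error handling and the second target path omitted; the harness invokes build/bin/nvmf_tgt by its full repo path):

  # Build the virtual initiator/target topology (abridged sketch of the commands above).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the host and namespace halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # reachability check, as in the log
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app runs inside the namespace

With this layout, deleting 10.0.0.2 from nvmf_tgt_if later in the run makes the target unreachable from the host without touching any host-side interface, which is exactly the condition the discovery code is exercised against.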
00:15:30.363 [2024-07-24 21:37:15.185288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.363 [2024-07-24 21:37:15.320045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.623 [2024-07-24 21:37:15.465783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.623 [2024-07-24 21:37:15.465858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.623 [2024-07-24 21:37:15.465869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.623 [2024-07-24 21:37:15.465877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.623 [2024-07-24 21:37:15.465884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.623 [2024-07-24 21:37:15.465914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.623 [2024-07-24 21:37:15.539866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.190 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.190 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:15:31.190 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.190 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.190 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:31.449 [2024-07-24 21:37:16.238359] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.449 [2024-07-24 21:37:16.246489] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:31.449 null0 00:15:31.449 [2024-07-24 21:37:16.278367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76629 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76629 /tmp/host.sock 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76629 ']' 00:15:31.449 21:37:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:31.449 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.449 21:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:31.449 [2024-07-24 21:37:16.361553] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:15:31.449 [2024-07-24 21:37:16.361961] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76629 ] 00:15:31.707 [2024-07-24 21:37:16.505100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.707 [2024-07-24 21:37:16.662727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.658 [2024-07-24 21:37:17.416137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.658 21:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
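The host side of the test is a second SPDK application (pid 76629 in this run) with its RPC socket on /tmp/host.sock, and the whole scenario is driven over that socket: a bdev_nvme option is set, framework init is completed (the app was started with --wait-for-rpc), and discovery is started against 10.0.0.2:8009 with deliberately short reconnect/ctrlr-loss timeouts so that losing the interface later is noticed within a couple of seconds rather than the defaults. The sequence, reconstructed from the rpc_cmd calls above (rpc.py stands in for the harness wrapper):

  # Host application, RPC-driven (sketch; flags copied from the log above).
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1        # bdev_nvme option used by the test
  rpc.py -s /tmp/host.sock framework_start_init              # finish the --wait-for-rpc startup
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach                                      # block until the discovered subsystem attaches

The -b nvme argument sets the controller name base, so the first attached subsystem's namespace shows up as bdev nvme0n1 below and, after the later re-attach, as nvme1n1.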
00:15:33.594 [2024-07-24 21:37:18.481700] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:33.594 [2024-07-24 21:37:18.481753] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:33.594 [2024-07-24 21:37:18.481771] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.594 [2024-07-24 21:37:18.487759] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:33.594 [2024-07-24 21:37:18.545255] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:33.594 [2024-07-24 21:37:18.545319] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:33.594 [2024-07-24 21:37:18.545351] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:33.594 [2024-07-24 21:37:18.545370] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:33.594 [2024-07-24 21:37:18.545397] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:33.594 [2024-07-24 21:37:18.550099] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d46ef0 was disconnected and freed. delete nvme_qpair. 
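From here on the script only ever watches the bdev list: get_bdev_list dumps the names over the host RPC socket and wait_for_bdev sleeps in one-second steps until the list equals the expected value (nvme0n1 now, the empty string after the interface is pulled, nvme1n1 once it comes back). A minimal re-creation of that loop, matching the rpc_cmd | jq | sort | xargs pipeline the xtrace lines show (the test's real helper presumably also bounds the number of retries):

  # Poll the host app until its bdev list matches the expected value (sketch).
  get_bdev_list() {
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1                # source of the repeated "sleep 1" iterations in the log
      done
  }
  wait_for_bdev nvme0n1          # e.g. wait until the discovered namespace is visible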
00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:33.594 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:33.852 21:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:34.787 21:37:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:34.787 21:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:36.163 21:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:37.099 21:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.035 21:37:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:38.035 21:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:38.970 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:39.229 [2024-07-24 21:37:23.972880] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:39.229 [2024-07-24 21:37:23.973320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.229 [2024-07-24 21:37:23.973533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.229 [2024-07-24 21:37:23.973755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.229 [2024-07-24 21:37:23.973891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.229 [2024-07-24 21:37:23.973908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.229 [2024-07-24 21:37:23.973921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.229 [2024-07-24 21:37:23.973932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.229 [2024-07-24 21:37:23.973943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.229 [2024-07-24 21:37:23.973955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.229 [2024-07-24 21:37:23.973964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.229 [2024-07-24 21:37:23.973975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cacac0 is same with the state(5) to be set 00:15:39.229 [2024-07-24 21:37:23.982872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cacac0 (9): Bad file descriptor 
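The spdk_sock_recv() failure and the burst of ABORTED - SQ DELETION completions above are the delayed consequence of the injection at 21:37:18, when the test removed 10.0.0.2 from nvmf_tgt_if and downed the link inside the target namespace: the established NVMe/TCP connection silently lost its peer, and a few seconds later the host's socket read times out (errno 110), the queued admin commands (the async event requests and a keep-alive) are aborted, and the controller reset path starts. The injection itself is just two commands, copied from the log:

  # Make the first target path disappear without telling the host (from the log above).
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # Host side afterwards: "Connection timed out" on the socket, then reset/reconnect
  # attempts paced by --reconnect-delay-sec until --ctrlr-loss-timeout-sec expires.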
00:15:39.229 [2024-07-24 21:37:23.992906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:39.229 21:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.229 21:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:39.229 21:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:40.165 [2024-07-24 21:37:25.024815] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:40.165 [2024-07-24 21:37:25.025007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cacac0 with addr=10.0.0.2, port=4420 00:15:40.165 [2024-07-24 21:37:25.025053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cacac0 is same with the state(5) to be set 00:15:40.165 [2024-07-24 21:37:25.025178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cacac0 (9): Bad file descriptor 00:15:40.165 [2024-07-24 21:37:25.025347] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:40.165 [2024-07-24 21:37:25.025436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:40.165 [2024-07-24 21:37:25.025476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:40.165 [2024-07-24 21:37:25.025511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:40.165 [2024-07-24 21:37:25.025565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:40.165 [2024-07-24 21:37:25.025594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:40.165 21:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:41.101 [2024-07-24 21:37:26.025737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:15:41.102 [2024-07-24 21:37:26.025834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:41.102 [2024-07-24 21:37:26.025851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:41.102 [2024-07-24 21:37:26.025861] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:41.102 [2024-07-24 21:37:26.025892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.102 [2024-07-24 21:37:26.025933] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:41.102 [2024-07-24 21:37:26.026024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.102 [2024-07-24 21:37:26.026044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.102 [2024-07-24 21:37:26.026061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.102 [2024-07-24 21:37:26.026071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.102 [2024-07-24 21:37:26.026098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.102 [2024-07-24 21:37:26.026125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.102 [2024-07-24 21:37:26.026155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.102 [2024-07-24 21:37:26.026166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.102 [2024-07-24 21:37:26.026177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.102 [2024-07-24 21:37:26.026187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.102 [2024-07-24 21:37:26.026198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
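[editor's note] With the data path gone, the discovery poller for 10.0.0.2:8009 gives up as well: remove_discovery_entry drops nqn.2016-06.io.spdk:cnode0, the in-flight admin commands (ASYNC EVENT REQUEST, KEEP ALIVE) are aborted with SQ DELETION, and the backing bdev disappears, which is what lets the wait loop's comparison finally see an empty list. The state of the long-running discovery session can be inspected over the same RPC socket; a hedged example follows (bdev_nvme_get_discovery_info is present in recent SPDK releases, but check scripts/rpc.py for the build in use).

    # Ask the host app which discovery services it is running and which
    # subsystems they have attached; output is JSON.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq .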
00:15:41.102 [2024-07-24 21:37:26.026226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb0860 (9): Bad file descriptor 00:15:41.102 [2024-07-24 21:37:26.026912] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:41.102 [2024-07-24 21:37:26.026932] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:41.102 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:41.361 21:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.317 21:37:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:42.317 21:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:43.263 [2024-07-24 21:37:28.036582] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:43.263 [2024-07-24 21:37:28.036658] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:43.263 [2024-07-24 21:37:28.036681] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:43.263 [2024-07-24 21:37:28.042617] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:43.263 [2024-07-24 21:37:28.099933] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:43.263 [2024-07-24 21:37:28.100377] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:43.263 [2024-07-24 21:37:28.100460] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:43.263 [2024-07-24 21:37:28.100586] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:15:43.263 [2024-07-24 21:37:28.100697] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:43.263 [2024-07-24 21:37:28.105186] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d24460 was disconnected and freed. delete nvme_qpair. 
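[editor's note] Once @82/@83 put 10.0.0.2/24 back on nvmf_tgt_if and bring the link up, the discovery poller reconnects to 10.0.0.2:8009, fetches the discovery log page, sees the NVM subsystem at port 4420 again and attaches it as controller nvme1, which creates bdev nvme1n1; the qpair 0x1d24460 being "disconnected and freed" at the end is the normal teardown of the temporary discovery qpair. The long-lived discovery session driving this behaviour would have been created at the start of the test with bdev_nvme_start_discovery; a hedged sketch is shown below (flag spellings follow scripts/rpc.py conventions and may differ between SPDK versions).

    # Start a discovery service named "nvme" against the target's discovery
    # port; bdevs such as nvme1n1 are created automatically for every NVM
    # subsystem reported in the discovery log page.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -w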
00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76629 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76629 ']' 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76629 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76629 00:15:43.522 killing process with pid 76629 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76629' 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76629 00:15:43.522 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76629 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.782 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.782 rmmod nvme_tcp 00:15:44.041 rmmod nvme_fabrics 00:15:44.041 rmmod nvme_keyring 00:15:44.041 21:37:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76597 ']' 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76597 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76597 ']' 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76597 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76597 00:15:44.041 killing process with pid 76597 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76597' 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76597 00:15:44.041 21:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76597 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:44.301 00:15:44.301 real 0m14.523s 00:15:44.301 user 0m25.167s 00:15:44.301 sys 0m2.517s 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.301 ************************************ 00:15:44.301 END TEST nvmf_discovery_remove_ifc 00:15:44.301 ************************************ 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.301 ************************************ 00:15:44.301 START TEST nvmf_identify_kernel_target 00:15:44.301 ************************************ 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:44.301 * Looking for test storage... 00:15:44.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:44.301 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.561 
21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:44.561 Cannot find device "nvmf_tgt_br" 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.561 Cannot find device "nvmf_tgt_br2" 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:44.561 Cannot find device "nvmf_tgt_br" 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:15:44.561 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:44.561 Cannot find device "nvmf_tgt_br2" 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:44.562 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.821 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:44.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:44.822 00:15:44.822 --- 10.0.0.2 ping statistics --- 00:15:44.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.822 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:44.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:44.822 00:15:44.822 --- 10.0.0.3 ping statistics --- 00:15:44.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.822 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:15:44.822 00:15:44.822 --- 10.0.0.1 ping statistics --- 00:15:44.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.822 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:44.822 21:37:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:45.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:45.340 Waiting for block devices as requested 00:15:45.340 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.340 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:45.340 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:45.599 No valid GPT data, bailing 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:15:45.599 21:37:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:45.599 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:45.600 No valid GPT data, bailing 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:45.600 No valid GPT data, bailing 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:45.600 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:45.858 No valid GPT data, bailing 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -a 10.0.0.1 -t tcp -s 4420 00:15:45.858 00:15:45.858 Discovery Log Number of Records 2, Generation counter 2 00:15:45.858 =====Discovery Log Entry 0====== 00:15:45.858 trtype: tcp 00:15:45.858 adrfam: ipv4 00:15:45.858 subtype: current discovery subsystem 00:15:45.858 treq: not specified, sq flow control disable supported 00:15:45.858 portid: 1 00:15:45.858 trsvcid: 4420 00:15:45.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:45.858 traddr: 10.0.0.1 00:15:45.858 eflags: none 00:15:45.858 sectype: none 00:15:45.858 =====Discovery Log Entry 1====== 00:15:45.858 trtype: tcp 00:15:45.858 adrfam: ipv4 00:15:45.858 subtype: nvme subsystem 00:15:45.858 treq: not 
specified, sq flow control disable supported 00:15:45.858 portid: 1 00:15:45.858 trsvcid: 4420 00:15:45.858 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:45.858 traddr: 10.0.0.1 00:15:45.858 eflags: none 00:15:45.858 sectype: none 00:15:45.858 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:45.858 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:46.116 ===================================================== 00:15:46.116 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:46.116 ===================================================== 00:15:46.116 Controller Capabilities/Features 00:15:46.116 ================================ 00:15:46.116 Vendor ID: 0000 00:15:46.116 Subsystem Vendor ID: 0000 00:15:46.116 Serial Number: 6a9ed0fdb4dc8fb4bced 00:15:46.116 Model Number: Linux 00:15:46.116 Firmware Version: 6.7.0-68 00:15:46.116 Recommended Arb Burst: 0 00:15:46.116 IEEE OUI Identifier: 00 00 00 00:15:46.116 Multi-path I/O 00:15:46.116 May have multiple subsystem ports: No 00:15:46.116 May have multiple controllers: No 00:15:46.116 Associated with SR-IOV VF: No 00:15:46.116 Max Data Transfer Size: Unlimited 00:15:46.116 Max Number of Namespaces: 0 00:15:46.116 Max Number of I/O Queues: 1024 00:15:46.116 NVMe Specification Version (VS): 1.3 00:15:46.116 NVMe Specification Version (Identify): 1.3 00:15:46.116 Maximum Queue Entries: 1024 00:15:46.116 Contiguous Queues Required: No 00:15:46.116 Arbitration Mechanisms Supported 00:15:46.116 Weighted Round Robin: Not Supported 00:15:46.116 Vendor Specific: Not Supported 00:15:46.116 Reset Timeout: 7500 ms 00:15:46.116 Doorbell Stride: 4 bytes 00:15:46.116 NVM Subsystem Reset: Not Supported 00:15:46.116 Command Sets Supported 00:15:46.116 NVM Command Set: Supported 00:15:46.116 Boot Partition: Not Supported 00:15:46.116 Memory Page Size Minimum: 4096 bytes 00:15:46.116 Memory Page Size Maximum: 4096 bytes 00:15:46.116 Persistent Memory Region: Not Supported 00:15:46.116 Optional Asynchronous Events Supported 00:15:46.116 Namespace Attribute Notices: Not Supported 00:15:46.116 Firmware Activation Notices: Not Supported 00:15:46.116 ANA Change Notices: Not Supported 00:15:46.116 PLE Aggregate Log Change Notices: Not Supported 00:15:46.116 LBA Status Info Alert Notices: Not Supported 00:15:46.116 EGE Aggregate Log Change Notices: Not Supported 00:15:46.116 Normal NVM Subsystem Shutdown event: Not Supported 00:15:46.116 Zone Descriptor Change Notices: Not Supported 00:15:46.116 Discovery Log Change Notices: Supported 00:15:46.116 Controller Attributes 00:15:46.116 128-bit Host Identifier: Not Supported 00:15:46.116 Non-Operational Permissive Mode: Not Supported 00:15:46.116 NVM Sets: Not Supported 00:15:46.116 Read Recovery Levels: Not Supported 00:15:46.116 Endurance Groups: Not Supported 00:15:46.116 Predictable Latency Mode: Not Supported 00:15:46.116 Traffic Based Keep ALive: Not Supported 00:15:46.116 Namespace Granularity: Not Supported 00:15:46.116 SQ Associations: Not Supported 00:15:46.116 UUID List: Not Supported 00:15:46.116 Multi-Domain Subsystem: Not Supported 00:15:46.116 Fixed Capacity Management: Not Supported 00:15:46.116 Variable Capacity Management: Not Supported 00:15:46.116 Delete Endurance Group: Not Supported 00:15:46.116 Delete NVM Set: Not Supported 00:15:46.116 Extended LBA Formats Supported: Not Supported 00:15:46.116 Flexible Data 
Placement Supported: Not Supported 00:15:46.116 00:15:46.116 Controller Memory Buffer Support 00:15:46.116 ================================ 00:15:46.116 Supported: No 00:15:46.116 00:15:46.116 Persistent Memory Region Support 00:15:46.116 ================================ 00:15:46.116 Supported: No 00:15:46.116 00:15:46.116 Admin Command Set Attributes 00:15:46.116 ============================ 00:15:46.116 Security Send/Receive: Not Supported 00:15:46.116 Format NVM: Not Supported 00:15:46.116 Firmware Activate/Download: Not Supported 00:15:46.116 Namespace Management: Not Supported 00:15:46.116 Device Self-Test: Not Supported 00:15:46.116 Directives: Not Supported 00:15:46.116 NVMe-MI: Not Supported 00:15:46.116 Virtualization Management: Not Supported 00:15:46.116 Doorbell Buffer Config: Not Supported 00:15:46.116 Get LBA Status Capability: Not Supported 00:15:46.116 Command & Feature Lockdown Capability: Not Supported 00:15:46.116 Abort Command Limit: 1 00:15:46.116 Async Event Request Limit: 1 00:15:46.116 Number of Firmware Slots: N/A 00:15:46.116 Firmware Slot 1 Read-Only: N/A 00:15:46.116 Firmware Activation Without Reset: N/A 00:15:46.116 Multiple Update Detection Support: N/A 00:15:46.116 Firmware Update Granularity: No Information Provided 00:15:46.116 Per-Namespace SMART Log: No 00:15:46.116 Asymmetric Namespace Access Log Page: Not Supported 00:15:46.116 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:46.116 Command Effects Log Page: Not Supported 00:15:46.116 Get Log Page Extended Data: Supported 00:15:46.116 Telemetry Log Pages: Not Supported 00:15:46.116 Persistent Event Log Pages: Not Supported 00:15:46.116 Supported Log Pages Log Page: May Support 00:15:46.116 Commands Supported & Effects Log Page: Not Supported 00:15:46.116 Feature Identifiers & Effects Log Page:May Support 00:15:46.116 NVMe-MI Commands & Effects Log Page: May Support 00:15:46.116 Data Area 4 for Telemetry Log: Not Supported 00:15:46.116 Error Log Page Entries Supported: 1 00:15:46.116 Keep Alive: Not Supported 00:15:46.116 00:15:46.116 NVM Command Set Attributes 00:15:46.116 ========================== 00:15:46.116 Submission Queue Entry Size 00:15:46.116 Max: 1 00:15:46.116 Min: 1 00:15:46.116 Completion Queue Entry Size 00:15:46.116 Max: 1 00:15:46.116 Min: 1 00:15:46.116 Number of Namespaces: 0 00:15:46.116 Compare Command: Not Supported 00:15:46.116 Write Uncorrectable Command: Not Supported 00:15:46.116 Dataset Management Command: Not Supported 00:15:46.116 Write Zeroes Command: Not Supported 00:15:46.116 Set Features Save Field: Not Supported 00:15:46.116 Reservations: Not Supported 00:15:46.116 Timestamp: Not Supported 00:15:46.116 Copy: Not Supported 00:15:46.116 Volatile Write Cache: Not Present 00:15:46.116 Atomic Write Unit (Normal): 1 00:15:46.116 Atomic Write Unit (PFail): 1 00:15:46.116 Atomic Compare & Write Unit: 1 00:15:46.116 Fused Compare & Write: Not Supported 00:15:46.116 Scatter-Gather List 00:15:46.116 SGL Command Set: Supported 00:15:46.116 SGL Keyed: Not Supported 00:15:46.116 SGL Bit Bucket Descriptor: Not Supported 00:15:46.116 SGL Metadata Pointer: Not Supported 00:15:46.116 Oversized SGL: Not Supported 00:15:46.116 SGL Metadata Address: Not Supported 00:15:46.116 SGL Offset: Supported 00:15:46.116 Transport SGL Data Block: Not Supported 00:15:46.116 Replay Protected Memory Block: Not Supported 00:15:46.116 00:15:46.116 Firmware Slot Information 00:15:46.116 ========================= 00:15:46.116 Active slot: 0 00:15:46.116 00:15:46.116 00:15:46.116 Error Log 
00:15:46.116 ========= 00:15:46.116 00:15:46.116 Active Namespaces 00:15:46.116 ================= 00:15:46.116 Discovery Log Page 00:15:46.116 ================== 00:15:46.116 Generation Counter: 2 00:15:46.116 Number of Records: 2 00:15:46.116 Record Format: 0 00:15:46.116 00:15:46.116 Discovery Log Entry 0 00:15:46.116 ---------------------- 00:15:46.116 Transport Type: 3 (TCP) 00:15:46.116 Address Family: 1 (IPv4) 00:15:46.116 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:46.116 Entry Flags: 00:15:46.116 Duplicate Returned Information: 0 00:15:46.116 Explicit Persistent Connection Support for Discovery: 0 00:15:46.116 Transport Requirements: 00:15:46.116 Secure Channel: Not Specified 00:15:46.116 Port ID: 1 (0x0001) 00:15:46.116 Controller ID: 65535 (0xffff) 00:15:46.116 Admin Max SQ Size: 32 00:15:46.116 Transport Service Identifier: 4420 00:15:46.116 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:46.116 Transport Address: 10.0.0.1 00:15:46.116 Discovery Log Entry 1 00:15:46.116 ---------------------- 00:15:46.116 Transport Type: 3 (TCP) 00:15:46.116 Address Family: 1 (IPv4) 00:15:46.116 Subsystem Type: 2 (NVM Subsystem) 00:15:46.116 Entry Flags: 00:15:46.116 Duplicate Returned Information: 0 00:15:46.116 Explicit Persistent Connection Support for Discovery: 0 00:15:46.116 Transport Requirements: 00:15:46.116 Secure Channel: Not Specified 00:15:46.116 Port ID: 1 (0x0001) 00:15:46.116 Controller ID: 65535 (0xffff) 00:15:46.116 Admin Max SQ Size: 32 00:15:46.116 Transport Service Identifier: 4420 00:15:46.116 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:46.116 Transport Address: 10.0.0.1 00:15:46.116 21:37:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:46.116 get_feature(0x01) failed 00:15:46.117 get_feature(0x02) failed 00:15:46.117 get_feature(0x04) failed 00:15:46.117 ===================================================== 00:15:46.117 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:46.117 ===================================================== 00:15:46.117 Controller Capabilities/Features 00:15:46.117 ================================ 00:15:46.117 Vendor ID: 0000 00:15:46.117 Subsystem Vendor ID: 0000 00:15:46.117 Serial Number: bcc784f0601528c96ee1 00:15:46.117 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:46.117 Firmware Version: 6.7.0-68 00:15:46.117 Recommended Arb Burst: 6 00:15:46.117 IEEE OUI Identifier: 00 00 00 00:15:46.117 Multi-path I/O 00:15:46.117 May have multiple subsystem ports: Yes 00:15:46.117 May have multiple controllers: Yes 00:15:46.117 Associated with SR-IOV VF: No 00:15:46.117 Max Data Transfer Size: Unlimited 00:15:46.117 Max Number of Namespaces: 1024 00:15:46.117 Max Number of I/O Queues: 128 00:15:46.117 NVMe Specification Version (VS): 1.3 00:15:46.117 NVMe Specification Version (Identify): 1.3 00:15:46.117 Maximum Queue Entries: 1024 00:15:46.117 Contiguous Queues Required: No 00:15:46.117 Arbitration Mechanisms Supported 00:15:46.117 Weighted Round Robin: Not Supported 00:15:46.117 Vendor Specific: Not Supported 00:15:46.117 Reset Timeout: 7500 ms 00:15:46.117 Doorbell Stride: 4 bytes 00:15:46.117 NVM Subsystem Reset: Not Supported 00:15:46.117 Command Sets Supported 00:15:46.117 NVM Command Set: Supported 00:15:46.117 Boot Partition: Not Supported 00:15:46.117 Memory 
Page Size Minimum: 4096 bytes 00:15:46.117 Memory Page Size Maximum: 4096 bytes 00:15:46.117 Persistent Memory Region: Not Supported 00:15:46.117 Optional Asynchronous Events Supported 00:15:46.117 Namespace Attribute Notices: Supported 00:15:46.117 Firmware Activation Notices: Not Supported 00:15:46.117 ANA Change Notices: Supported 00:15:46.117 PLE Aggregate Log Change Notices: Not Supported 00:15:46.117 LBA Status Info Alert Notices: Not Supported 00:15:46.117 EGE Aggregate Log Change Notices: Not Supported 00:15:46.117 Normal NVM Subsystem Shutdown event: Not Supported 00:15:46.117 Zone Descriptor Change Notices: Not Supported 00:15:46.117 Discovery Log Change Notices: Not Supported 00:15:46.117 Controller Attributes 00:15:46.117 128-bit Host Identifier: Supported 00:15:46.117 Non-Operational Permissive Mode: Not Supported 00:15:46.117 NVM Sets: Not Supported 00:15:46.117 Read Recovery Levels: Not Supported 00:15:46.117 Endurance Groups: Not Supported 00:15:46.117 Predictable Latency Mode: Not Supported 00:15:46.117 Traffic Based Keep ALive: Supported 00:15:46.117 Namespace Granularity: Not Supported 00:15:46.117 SQ Associations: Not Supported 00:15:46.117 UUID List: Not Supported 00:15:46.117 Multi-Domain Subsystem: Not Supported 00:15:46.117 Fixed Capacity Management: Not Supported 00:15:46.117 Variable Capacity Management: Not Supported 00:15:46.117 Delete Endurance Group: Not Supported 00:15:46.117 Delete NVM Set: Not Supported 00:15:46.117 Extended LBA Formats Supported: Not Supported 00:15:46.117 Flexible Data Placement Supported: Not Supported 00:15:46.117 00:15:46.117 Controller Memory Buffer Support 00:15:46.117 ================================ 00:15:46.117 Supported: No 00:15:46.117 00:15:46.117 Persistent Memory Region Support 00:15:46.117 ================================ 00:15:46.117 Supported: No 00:15:46.117 00:15:46.117 Admin Command Set Attributes 00:15:46.117 ============================ 00:15:46.117 Security Send/Receive: Not Supported 00:15:46.117 Format NVM: Not Supported 00:15:46.117 Firmware Activate/Download: Not Supported 00:15:46.117 Namespace Management: Not Supported 00:15:46.117 Device Self-Test: Not Supported 00:15:46.117 Directives: Not Supported 00:15:46.117 NVMe-MI: Not Supported 00:15:46.117 Virtualization Management: Not Supported 00:15:46.117 Doorbell Buffer Config: Not Supported 00:15:46.117 Get LBA Status Capability: Not Supported 00:15:46.117 Command & Feature Lockdown Capability: Not Supported 00:15:46.117 Abort Command Limit: 4 00:15:46.117 Async Event Request Limit: 4 00:15:46.117 Number of Firmware Slots: N/A 00:15:46.117 Firmware Slot 1 Read-Only: N/A 00:15:46.117 Firmware Activation Without Reset: N/A 00:15:46.117 Multiple Update Detection Support: N/A 00:15:46.117 Firmware Update Granularity: No Information Provided 00:15:46.117 Per-Namespace SMART Log: Yes 00:15:46.117 Asymmetric Namespace Access Log Page: Supported 00:15:46.117 ANA Transition Time : 10 sec 00:15:46.117 00:15:46.117 Asymmetric Namespace Access Capabilities 00:15:46.117 ANA Optimized State : Supported 00:15:46.117 ANA Non-Optimized State : Supported 00:15:46.117 ANA Inaccessible State : Supported 00:15:46.117 ANA Persistent Loss State : Supported 00:15:46.117 ANA Change State : Supported 00:15:46.117 ANAGRPID is not changed : No 00:15:46.117 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:46.117 00:15:46.117 ANA Group Identifier Maximum : 128 00:15:46.117 Number of ANA Group Identifiers : 128 00:15:46.117 Max Number of Allowed Namespaces : 1024 00:15:46.117 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:15:46.117 Command Effects Log Page: Supported 00:15:46.117 Get Log Page Extended Data: Supported 00:15:46.117 Telemetry Log Pages: Not Supported 00:15:46.117 Persistent Event Log Pages: Not Supported 00:15:46.117 Supported Log Pages Log Page: May Support 00:15:46.117 Commands Supported & Effects Log Page: Not Supported 00:15:46.117 Feature Identifiers & Effects Log Page:May Support 00:15:46.117 NVMe-MI Commands & Effects Log Page: May Support 00:15:46.117 Data Area 4 for Telemetry Log: Not Supported 00:15:46.117 Error Log Page Entries Supported: 128 00:15:46.117 Keep Alive: Supported 00:15:46.117 Keep Alive Granularity: 1000 ms 00:15:46.117 00:15:46.117 NVM Command Set Attributes 00:15:46.117 ========================== 00:15:46.117 Submission Queue Entry Size 00:15:46.117 Max: 64 00:15:46.117 Min: 64 00:15:46.117 Completion Queue Entry Size 00:15:46.117 Max: 16 00:15:46.117 Min: 16 00:15:46.117 Number of Namespaces: 1024 00:15:46.117 Compare Command: Not Supported 00:15:46.117 Write Uncorrectable Command: Not Supported 00:15:46.117 Dataset Management Command: Supported 00:15:46.117 Write Zeroes Command: Supported 00:15:46.117 Set Features Save Field: Not Supported 00:15:46.117 Reservations: Not Supported 00:15:46.117 Timestamp: Not Supported 00:15:46.117 Copy: Not Supported 00:15:46.117 Volatile Write Cache: Present 00:15:46.117 Atomic Write Unit (Normal): 1 00:15:46.117 Atomic Write Unit (PFail): 1 00:15:46.117 Atomic Compare & Write Unit: 1 00:15:46.117 Fused Compare & Write: Not Supported 00:15:46.117 Scatter-Gather List 00:15:46.117 SGL Command Set: Supported 00:15:46.117 SGL Keyed: Not Supported 00:15:46.117 SGL Bit Bucket Descriptor: Not Supported 00:15:46.117 SGL Metadata Pointer: Not Supported 00:15:46.117 Oversized SGL: Not Supported 00:15:46.117 SGL Metadata Address: Not Supported 00:15:46.117 SGL Offset: Supported 00:15:46.117 Transport SGL Data Block: Not Supported 00:15:46.117 Replay Protected Memory Block: Not Supported 00:15:46.117 00:15:46.117 Firmware Slot Information 00:15:46.117 ========================= 00:15:46.117 Active slot: 0 00:15:46.117 00:15:46.117 Asymmetric Namespace Access 00:15:46.117 =========================== 00:15:46.117 Change Count : 0 00:15:46.117 Number of ANA Group Descriptors : 1 00:15:46.117 ANA Group Descriptor : 0 00:15:46.117 ANA Group ID : 1 00:15:46.117 Number of NSID Values : 1 00:15:46.117 Change Count : 0 00:15:46.117 ANA State : 1 00:15:46.117 Namespace Identifier : 1 00:15:46.117 00:15:46.117 Commands Supported and Effects 00:15:46.117 ============================== 00:15:46.117 Admin Commands 00:15:46.117 -------------- 00:15:46.117 Get Log Page (02h): Supported 00:15:46.117 Identify (06h): Supported 00:15:46.117 Abort (08h): Supported 00:15:46.117 Set Features (09h): Supported 00:15:46.117 Get Features (0Ah): Supported 00:15:46.117 Asynchronous Event Request (0Ch): Supported 00:15:46.117 Keep Alive (18h): Supported 00:15:46.117 I/O Commands 00:15:46.117 ------------ 00:15:46.117 Flush (00h): Supported 00:15:46.117 Write (01h): Supported LBA-Change 00:15:46.117 Read (02h): Supported 00:15:46.117 Write Zeroes (08h): Supported LBA-Change 00:15:46.117 Dataset Management (09h): Supported 00:15:46.117 00:15:46.117 Error Log 00:15:46.117 ========= 00:15:46.117 Entry: 0 00:15:46.117 Error Count: 0x3 00:15:46.117 Submission Queue Id: 0x0 00:15:46.117 Command Id: 0x5 00:15:46.117 Phase Bit: 0 00:15:46.117 Status Code: 0x2 00:15:46.117 Status Code Type: 0x0 00:15:46.118 Do Not Retry: 1 00:15:46.118 Error 
Location: 0x28 00:15:46.118 LBA: 0x0 00:15:46.118 Namespace: 0x0 00:15:46.118 Vendor Log Page: 0x0 00:15:46.118 ----------- 00:15:46.118 Entry: 1 00:15:46.118 Error Count: 0x2 00:15:46.118 Submission Queue Id: 0x0 00:15:46.118 Command Id: 0x5 00:15:46.118 Phase Bit: 0 00:15:46.118 Status Code: 0x2 00:15:46.118 Status Code Type: 0x0 00:15:46.118 Do Not Retry: 1 00:15:46.118 Error Location: 0x28 00:15:46.118 LBA: 0x0 00:15:46.118 Namespace: 0x0 00:15:46.118 Vendor Log Page: 0x0 00:15:46.118 ----------- 00:15:46.118 Entry: 2 00:15:46.118 Error Count: 0x1 00:15:46.118 Submission Queue Id: 0x0 00:15:46.118 Command Id: 0x4 00:15:46.118 Phase Bit: 0 00:15:46.118 Status Code: 0x2 00:15:46.118 Status Code Type: 0x0 00:15:46.118 Do Not Retry: 1 00:15:46.118 Error Location: 0x28 00:15:46.118 LBA: 0x0 00:15:46.118 Namespace: 0x0 00:15:46.118 Vendor Log Page: 0x0 00:15:46.118 00:15:46.118 Number of Queues 00:15:46.118 ================ 00:15:46.118 Number of I/O Submission Queues: 128 00:15:46.118 Number of I/O Completion Queues: 128 00:15:46.118 00:15:46.118 ZNS Specific Controller Data 00:15:46.118 ============================ 00:15:46.118 Zone Append Size Limit: 0 00:15:46.118 00:15:46.118 00:15:46.118 Active Namespaces 00:15:46.118 ================= 00:15:46.118 get_feature(0x05) failed 00:15:46.118 Namespace ID:1 00:15:46.118 Command Set Identifier: NVM (00h) 00:15:46.118 Deallocate: Supported 00:15:46.118 Deallocated/Unwritten Error: Not Supported 00:15:46.118 Deallocated Read Value: Unknown 00:15:46.118 Deallocate in Write Zeroes: Not Supported 00:15:46.118 Deallocated Guard Field: 0xFFFF 00:15:46.118 Flush: Supported 00:15:46.118 Reservation: Not Supported 00:15:46.118 Namespace Sharing Capabilities: Multiple Controllers 00:15:46.118 Size (in LBAs): 1310720 (5GiB) 00:15:46.118 Capacity (in LBAs): 1310720 (5GiB) 00:15:46.118 Utilization (in LBAs): 1310720 (5GiB) 00:15:46.118 UUID: 20daa3b7-3356-446a-ba92-ca136b845077 00:15:46.118 Thin Provisioning: Not Supported 00:15:46.118 Per-NS Atomic Units: Yes 00:15:46.118 Atomic Boundary Size (Normal): 0 00:15:46.118 Atomic Boundary Size (PFail): 0 00:15:46.118 Atomic Boundary Offset: 0 00:15:46.118 NGUID/EUI64 Never Reused: No 00:15:46.118 ANA group ID: 1 00:15:46.118 Namespace Write Protected: No 00:15:46.118 Number of LBA Formats: 1 00:15:46.118 Current LBA Format: LBA Format #00 00:15:46.118 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:46.118 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.118 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.118 rmmod nvme_tcp 00:15:46.376 rmmod nvme_fabrics 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:15:46.376 21:37:31 
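[annotation] The identify report that ends here was produced by pointing spdk_nvme_identify at the kernel NVMe-oF target over TCP, using the command traced earlier in this log. A minimal re-run of that invocation is sketched below; the binary path and subsystem NQN are taken from the trace, and the get_feature failures printed above are simply reported by the tool and do not abort the identify.

    # Re-run of the identify step as traced in this job (adjust the repo path to your checkout).
    # The -r string selects transport type, address family, target address/port and subsystem NQN.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    "$SPDK_BIN/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'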
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.376 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:15:46.377 21:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:47.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.313 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:47.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:47.313 00:15:47.313 real 0m2.975s 00:15:47.313 user 0m1.062s 00:15:47.313 sys 0m1.399s 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 ************************************ 00:15:47.313 END TEST nvmf_identify_kernel_target 00:15:47.313 ************************************ 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host -- 
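[annotation] The clean_kernel_target steps traced just above tear the kernel NVMe-oF target down through configfs in reverse order of creation: disable the namespace, unlink the subsystem from the port, remove the namespace and port directories, remove the subsystem, then unload the nvmet modules. A minimal sketch of that sequence follows; the trace only shows a bare `echo 0`, so writing it to the namespace's `enable` attribute is an assumption.

    # Sketch of the teardown order shown in the trace (run as root; paths and NQN from the log).
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed target of the bare 'echo 0'
    rm -f  "$nvmet/ports/1/subsystems/$nqn"                 # detach the subsystem from the port
    rmdir  "$nvmet/subsystems/$nqn/namespaces/1"            # drop the namespace
    rmdir  "$nvmet/ports/1"                                 # drop the TCP port
    rmdir  "$nvmet/subsystems/$nqn"                         # drop the subsystem itself
    modprobe -r nvmet_tcp nvmet                             # unload the kernel target modules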
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 ************************************ 00:15:47.313 START TEST nvmf_auth_host 00:15:47.313 ************************************ 00:15:47.313 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:47.571 * Looking for test storage... 00:15:47.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.571 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.572 21:37:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:47.572 Cannot find device "nvmf_tgt_br" 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.572 Cannot find device "nvmf_tgt_br2" 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:47.572 Cannot find device "nvmf_tgt_br" 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:47.572 Cannot find device "nvmf_tgt_br2" 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.572 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:47.831 21:37:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:47.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:15:47.831 00:15:47.831 --- 10.0.0.2 ping statistics --- 00:15:47.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.831 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:47.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:47.831 00:15:47.831 --- 10.0.0.3 ping statistics --- 00:15:47.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.831 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:15:47.831 00:15:47.831 --- 10.0.0.1 ping statistics --- 00:15:47.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.831 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77522 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77522 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77522 ']' 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
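[annotation] Before starting the auth target, nvmf_veth_init (traced above) rebuilds the virtual test network: a namespace for the target, three veth pairs, a bridge joining the host-side peers, 10.0.0.x/24 addresses on both ends, iptables rules admitting port 4420, and ping checks in both directions. A condensed sketch of the same topology, using only commands and names that appear in the trace:

    # Condensed nvmf_veth_init as traced above (root required).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peers
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator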
00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.831 21:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f05e5db08ab406ab70e70a7dc5f919b5 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:49.204 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jzZ 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f05e5db08ab406ab70e70a7dc5f919b5 0 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f05e5db08ab406ab70e70a7dc5f919b5 0 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f05e5db08ab406ab70e70a7dc5f919b5 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jzZ 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jzZ 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.jzZ 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.205 21:37:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3ba531f7c0b884243f37a2676a366a05848b44286c2e9cb55e3ae3ab415f591 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XMy 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3ba531f7c0b884243f37a2676a366a05848b44286c2e9cb55e3ae3ab415f591 3 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3ba531f7c0b884243f37a2676a366a05848b44286c2e9cb55e3ae3ab415f591 3 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3ba531f7c0b884243f37a2676a366a05848b44286c2e9cb55e3ae3ab415f591 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:15:49.205 21:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XMy 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XMy 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XMy 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21e2baca8cf2ee7c723f384a868ed6c89c18bb16bef596c2 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DKO 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21e2baca8cf2ee7c723f384a868ed6c89c18bb16bef596c2 0 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21e2baca8cf2ee7c723f384a868ed6c89c18bb16bef596c2 0 
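[annotation] The secrets used by the DH-HMAC-CHAP tests come from gen_dhchap_key, traced above: it reads the requested number of random hex characters from /dev/urandom with xxd, writes a formatted secret into a mktemp'd /tmp/spdk.key-<digest>.XXX file, chmods it to 0600 and echoes the path. A rough sketch of that flow is below; the python one-liner that produces the real DHHC-1 secret string is not expanded in the trace, so format_dhchap_key here is only a stand-in.

    # Rough sketch of gen_dhchap_key as traced above (digest name, key length in hex chars).
    # Stand-in formatter: the genuine encoding is done by an unexpanded python one-liner
    # in the log and is NOT reproduced here.
    format_dhchap_key() { printf 'DHHC-1:%s:%s:\n' "$2" "$1"; }

    gen_dhchap_key() {
        local digest=$1 len=$2
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")              # e.g. /tmp/spdk.key-null.jzZ
        format_dhchap_key "$key" "$digest" > "$file"          # placeholder for the python step
        chmod 0600 "$file"                                    # secrets must not be world-readable
        echo "$file"
    }

    keyfile=$(gen_dhchap_key null 32)                         # mirrors 'gen_dhchap_key null 32' above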
00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21e2baca8cf2ee7c723f384a868ed6c89c18bb16bef596c2 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DKO 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DKO 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DKO 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5dfc81e11d5d9bb21d436dc90077388fe159c66f534d6a59 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gcB 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5dfc81e11d5d9bb21d436dc90077388fe159c66f534d6a59 2 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5dfc81e11d5d9bb21d436dc90077388fe159c66f534d6a59 2 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5dfc81e11d5d9bb21d436dc90077388fe159c66f534d6a59 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gcB 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gcB 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gcB 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.205 21:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d762a4ec3b58d149ff656826094b87d 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WtH 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d762a4ec3b58d149ff656826094b87d 1 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d762a4ec3b58d149ff656826094b87d 1 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d762a4ec3b58d149ff656826094b87d 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:15:49.205 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WtH 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WtH 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.WtH 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a2cbf2d7a4806ba5774362e90ce21152 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EVD 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a2cbf2d7a4806ba5774362e90ce21152 1 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a2cbf2d7a4806ba5774362e90ce21152 1 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=a2cbf2d7a4806ba5774362e90ce21152 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EVD 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EVD 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.EVD 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:49.464 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4093f3ca99181ddb144cf1f06c9e60eed7c1d3831ac62266 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5Wq 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4093f3ca99181ddb144cf1f06c9e60eed7c1d3831ac62266 2 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4093f3ca99181ddb144cf1f06c9e60eed7c1d3831ac62266 2 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4093f3ca99181ddb144cf1f06c9e60eed7c1d3831ac62266 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5Wq 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5Wq 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5Wq 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:49.465 21:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=942ad6319396e3b4e40b0a5e5199f0dd 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OmG 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 942ad6319396e3b4e40b0a5e5199f0dd 0 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 942ad6319396e3b4e40b0a5e5199f0dd 0 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=942ad6319396e3b4e40b0a5e5199f0dd 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OmG 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OmG 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OmG 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de2ade2dbeea0c94be6a3b6bdca0d8b0fbc65a69dc69a00c787905e67e78cdfb 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DBv 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de2ade2dbeea0c94be6a3b6bdca0d8b0fbc65a69dc69a00c787905e67e78cdfb 3 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de2ade2dbeea0c94be6a3b6bdca0d8b0fbc65a69dc69a00c787905e67e78cdfb 3 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de2ade2dbeea0c94be6a3b6bdca0d8b0fbc65a69dc69a00c787905e67e78cdfb 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:15:49.465 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DBv 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DBv 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DBv 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77522 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77522 ']' 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.722 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jzZ 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XMy ]] 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XMy 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.980 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DKO 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gcB ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.gcB 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.WtH 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.EVD ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EVD 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5Wq 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OmG ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OmG 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DBv 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:49.981 21:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:49.981 21:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.497 Waiting for block devices as requested 00:15:50.497 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:50.497 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:51.104 21:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:51.104 No valid GPT data, bailing 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:51.104 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:51.372 No valid GPT data, bailing 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:51.372 No valid GPT data, bailing 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:51.372 No valid GPT data, bailing 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:51.372 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -a 10.0.0.1 -t tcp -s 4420 00:15:51.372 00:15:51.372 Discovery Log Number of Records 2, Generation counter 2 00:15:51.372 =====Discovery Log Entry 0====== 00:15:51.372 trtype: tcp 00:15:51.372 adrfam: ipv4 00:15:51.372 subtype: current discovery subsystem 00:15:51.372 treq: not specified, sq flow control disable supported 00:15:51.373 portid: 1 00:15:51.373 trsvcid: 4420 00:15:51.373 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:51.373 traddr: 10.0.0.1 00:15:51.373 eflags: none 00:15:51.373 sectype: none 00:15:51.373 =====Discovery Log Entry 1====== 00:15:51.373 trtype: tcp 00:15:51.373 adrfam: ipv4 00:15:51.373 subtype: nvme subsystem 00:15:51.373 treq: not specified, sq flow control disable supported 00:15:51.373 portid: 1 00:15:51.373 trsvcid: 4420 00:15:51.373 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:51.373 traddr: 10.0.0.1 00:15:51.373 eflags: none 00:15:51.373 sectype: none 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.647 nvme0n1 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.647 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.906 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.907 nvme0n1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.907 
21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.907 21:37:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.907 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.165 nvme0n1 00:15:52.165 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.165 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.165 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.165 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.165 21:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:52.165 21:37:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.165 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.166 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 nvme0n1 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.424 21:37:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 nvme0n1 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.424 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.683 
21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
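The trace above reduces to a short host-side DH-HMAC-CHAP flow: register the generated DHHC-1 secrets with the keyring, constrain the digests and DH groups the initiator may negotiate, attach the kernel nvmet subsystem with a key/ctrlr-key pair, then verify and detach before the next combination. A minimal sketch of that flow, assuming rpc_cmd in the trace resolves to SPDK's scripts/rpc.py as in the autotest helpers; the /tmp key-file names are the ones mktemp produced in this particular run:

  # Register the host secret and the bidirectional (controller) secret for keyid 1,
  # both stored as DHHC-1:<digest id>:<base64>: strings by gen_dhchap_key above.
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.DKO
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gcB

  # Restrict negotiation to the digest/dhgroup pair under test.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Attach the kernel nvmet subsystem at 10.0.0.1:4420 using key1/ckey1.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm the controller came up authenticated, then tear it down; the loop in
  # the trace repeats this for every digest/dhgroup/keyid combination.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The target side was prepared earlier in the trace by configure_kernel_target: the nvmet subsystem, namespace, and TCP port are created under /sys/kernel/config/nvmet, and the host NQN is linked into the subsystem's allowed_hosts before nvmet_auth_set_key installs the matching secret for each iteration.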
00:15:52.683 nvme0n1 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.683 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.684 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:52.943 21:37:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:52.943 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.201 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.201 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.201 21:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.201 nvme0n1 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.201 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.202 21:37:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.202 21:37:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.202 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 nvme0n1 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
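On the initiator side, connect_authenticate (host/auth.sh@60-61) reduces to the two RPCs visible in the trace: restrict the SPDK bdev_nvme module to the digest and DH group under test, then attach over TCP with DH-HMAC-CHAP enabled. A standalone sketch using scripts/rpc.py directly instead of the rpc_cmd wrapper; the flags are copied from the trace, and key1/ckey1 are assumed to be names of keys registered earlier in the test run (that registration is not part of this excerpt).

    # Only offer the digest/dhgroup being exercised in this iteration.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Authenticated attach; --dhchap-ctrlr-key makes the authentication bidirectional.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1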
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.460 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.717 nvme0n1 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.717 nvme0n1 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.717 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
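The get_main_ns_ip helper (nvmf/common.sh@741-755), traced in full before every attach above, only resolves which environment variable holds the address to dial for the current transport and prints its value (10.0.0.1 for the TCP initiator here). A reconstruction of the traced path; the TEST_TRANSPORT and NVMF_* variable names are inferred from the expanded values in the trace, and any fallback branches the trace skips over are omitted.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP     # common.sh@744
            ["tcp"]=NVMF_INITIATOR_IP         # common.sh@745
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                    # "[[ -z tcp ]]" in the trace
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # "[[ -z NVMF_INITIATOR_IP ]]"
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # common.sh@748: name of the variable
        [[ -z ${!ip} ]] && return 1                             # common.sh@750: "[[ -z 10.0.0.1 ]]"
        echo "${!ip}"                                           # common.sh@755
    }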
DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 nvme0n1 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
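The keyid 4 pass above is the unidirectional case: ckeys[4] is empty, the "[[ -z '' ]]" check at auth.sh@51 skips the controller key, and the attach carries only --dhchap-key key4. The mechanism is the ${ckeys[keyid]:+...} expansion at auth.sh@58, which produces the extra --dhchap-ctrlr-key arguments only when a controller secret exists. A minimal, self-contained illustration of that idiom (the placeholder values are hypothetical):

    ckeys=([0]="DHHC-1:03:placeholder" [4]="")    # keyid 4 has no controller key, as in the trace
    for keyid in 0 4; do
        # Expands to two extra arguments only when ckeys[keyid] is non-empty.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo rpc.py bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
    done
    # prints: ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
    #         ... --dhchap-key key4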
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.975 21:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.911 21:37:39 
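The auth.sh@101-104 frames mark the outer sweep: from here the log moves from ffdhe3072 to ffdhe4096, and every DH group is exercised against every key index, re-keying the target and repeating the connect/verify cycle each time. A hedged reconstruction of that loop; the array contents are assumptions (only ffdhe3072, ffdhe4096 and ffdhe6144, and only sha256, appear in this excerpt):

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)                 # groups visible in this excerpt
    for dhgroup in "${dhgroups[@]}"; do                      # auth.sh@101
        for keyid in "${!keys[@]}"; do                       # auth.sh@102
            nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"  # auth.sh@103: re-key the target
            connect_authenticate sha256 "$dhgroup" "$keyid"  # auth.sh@104: attach, verify, detach
        done
    done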
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.911 nvme0n1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.911 21:37:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.911 21:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.170 nvme0n1 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.170 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.429 nvme0n1 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.429 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.687 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.687 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.687 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:15:55.687 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.687 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.688 nvme0n1 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.688 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.948 21:37:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.948 nvme0n1 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.948 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
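Every authenticated attach in this log is followed by the same check-and-teardown (auth.sh@64-65): list the initiator's controllers, confirm the attach really produced nvme0, then detach so the next digest/DH-group/key combination starts from a clean state. A standalone sketch of that step, again calling scripts/rpc.py directly rather than the rpc_cmd wrapper:

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')   # auth.sh@64
    [[ $name == nvme0 ]]                  # a mismatch means authentication or attach failed
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0                      # auth.sh@65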
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:56.206 21:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:58.106 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:58.106 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:15:58.106 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.107 21:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.365 nvme0n1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.365 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.624 nvme0n1 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.624 21:37:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.624 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.882 21:37:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.882 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.196 nvme0n1 00:15:59.196 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.196 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.196 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.196 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.196 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.196 21:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.196 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:59.197 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.197 
21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.455 nvme0n1 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.455 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.714 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.972 nvme0n1 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.972 21:37:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:59.972 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.973 21:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.538 nvme0n1 00:16:00.539 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.539 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.539 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.539 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.539 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.539 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:00.797 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.798 21:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.364 nvme0n1 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:01.364 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.365 
21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.365 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.932 nvme0n1 00:16:01.932 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.932 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.932 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.932 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.932 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.932 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.191 21:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.759 nvme0n1 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.759 21:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.759 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.760 21:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.760 21:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.326 nvme0n1 00:16:03.326 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.326 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.326 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.326 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.326 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.326 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:03.585 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.586 nvme0n1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 nvme0n1 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:03.845 
21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 nvme0n1 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.845 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.104 
21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.104 nvme0n1 00:16:04.104 21:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:04.104 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.105 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.362 nvme0n1 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.362 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.619 nvme0n1 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.619 
21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.619 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.620 21:37:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.620 nvme0n1 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.620 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:04.877 21:37:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.877 nvme0n1 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.877 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.877 21:37:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.135 21:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.135 nvme0n1 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.135 
21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.135 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.136 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:16:05.394 nvme0n1 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:05.394 21:37:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.394 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.653 nvme0n1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.653 21:37:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.653 21:37:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.653 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.912 nvme0n1 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.912 21:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.170 nvme0n1 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.170 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:06.171 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.171 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.171 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:06.171 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.171 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.429 nvme0n1 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.429 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.687 nvme0n1 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.687 21:37:51 
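
Each pass of this trace follows the same shape: host/auth.sh installs one DH-HMAC-CHAP key on the target side (nvmet_auth_set_key), pins the host to a single digest/DH-group pair with bdev_nvme_set_options, attaches a controller over TCP with the matching --dhchap-key (adding --dhchap-ctrlr-key only when a controller secret exists), checks that the controller actually appeared, and detaches it before the next key id. A condensed sketch of one such iteration, using only the RPCs, NQNs and addresses visible in this log (rpc_cmd and nvmet_auth_set_key are the test suite's own helpers, not new commands):

# one iteration, e.g. sha384 / ffdhe4096 / keyid 3 as at the top of this excerpt
nvmet_auth_set_key sha384 ffdhe4096 3            # target side: program key3 (+ ckey3)
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3   # host side: connect and authenticate
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller came up
rpc_cmd bdev_nvme_detach_controller nvme0        # clean up before the next keyid
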
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.687 21:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.253 nvme0n1 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.253 21:37:52 
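
Every rpc_cmd call in this log is bracketed by the same autotest_common.sh lines: xtrace_disable (sh@561), set +x (sh@10) and a "[[ 0 == 0 ]]" check (sh@589), i.e. the harness silences the JSON-RPC plumbing while the call runs and then asserts that it exited with status 0. The sketch below only reproduces that observable pattern; quiet_check is a hypothetical name, the restore step is assumed (the counterpart of xtrace_disable is not shown in this excerpt), and the real rpc_cmd helper is more involved:

# pattern implied by the xtrace_disable / "[[ 0 == 0 ]]" pairs around each RPC (sketch only)
quiet_check() {
    xtrace_disable              # from the trace (autotest_common.sh@561): stop tracing the plumbing
    "$@"                        # run the wrapped command, e.g. an rpc.py call
    local rc=$?                 # capture its exit status
    set -x                      # stand-in for the restore helper, which this log does not show
    [[ $rc == 0 ]]              # surfaces in the log as "[[ 0 == 0 ]]" when the call succeeded
}
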
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.253 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.511 nvme0n1 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.511 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.769 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.027 nvme0n1 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.027 21:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.592 nvme0n1 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:08.592 21:37:53 
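
The get_main_ns_ip fragments that precede every attach (nvmf/common.sh@741-755) are the helper that picks which address to dial: it maps each transport to the shell variable holding the right address and, since this run connects over TCP, always dereferences NVMF_INITIATOR_IP to 10.0.0.1. A condensed reading of that helper, reconstructed from the trace; TEST_TRANSPORT is an assumed variable name for the logged value "tcp", and the body is a sketch rather than the verbatim script:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP        # nvmf/common.sh@744
        ["tcp"]=NVMF_INITIATOR_IP            # nvmf/common.sh@745
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}     # "tcp" here, so ip holds the name NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] && echo "${!ip}"         # indirect expansion: prints 10.0.0.1 in this run
}
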
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.592 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.850 nvme0n1 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.850 21:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.784 nvme0n1 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:09.784 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.785 21:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.351 nvme0n1 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.351 21:37:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.351 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.351 21:37:55 
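
Two details of the key handling recur in every pass: the secrets are the DHHC-1:NN:<base64>: strings registered up front (key0 through key4, with controller keys ckey0 through ckey3), and the controller key is passed conditionally. The expansion at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), adds the extra --dhchap-ctrlr-key argument only when a controller secret exists; for keyid 4, whose ckey is empty in this log, the array stays empty and the attach is one-way. A small self-contained illustration of that expansion (the array contents are placeholders, not the logged secrets):

#!/usr/bin/env bash
ckeys=("c0" "c1" "c2" "c3" "")                   # keyid 4 has no controller key, as in the trace
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key argument>}"
done
# keyids 0-3 print "--dhchap-ctrlr-key ckeyN"; keyid 4 prints the fallback,
# matching the key4-only bdev_nvme_attach_controller calls seen above.
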
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.352 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.918 nvme0n1 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:10.918 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:10.919 21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.919 
21:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.485 nvme0n1 00:16:11.485 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.485 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.485 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.485 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.485 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.744 21:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.311 nvme0n1 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:12.311 21:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.311 21:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.311 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 nvme0n1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:12.570 21:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 nvme0n1 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.829 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.830 nvme0n1 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.830 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.088 nvme0n1 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.089 21:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.347 nvme0n1 00:16:13.347 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.347 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.347 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.347 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.348 nvme0n1 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.348 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.607 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.608 nvme0n1 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:13.608 
21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.608 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 nvme0n1 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.867 
21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.867 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.126 nvme0n1 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:14.126 21:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.126 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 nvme0n1 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.393 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.664 nvme0n1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.664 
21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.664 21:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.664 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.923 nvme0n1 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:14.923 21:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.923 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.924 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.924 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.183 nvme0n1 00:16:15.183 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.183 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.183 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.183 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.183 21:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.183 21:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.183 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.443 nvme0n1 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.443 
21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.443 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
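This portion of the trace repeats one pattern per digest/DH-group/key-ID combination: the auth.sh@42-51 steps stage the target-side parameters (the 'hmac(sha512)' digest string, the ffdhe* group, and the DHHC-1 secret plus optional controller secret), and auth.sh@55-65 then configure the SPDK host side, attach over TCP with the matching --dhchap-key/--dhchap-ctrlr-key names, confirm a controller named nvme0 appears, and detach it before the next iteration. Below is a minimal sketch of that host-side RPC sequence, assuming a running SPDK application, secrets already registered under the names key0/ckey0 (that setup happens earlier in the script, outside this excerpt), and rpc_cmd as the test helper that forwards to SPDK's JSON-RPC server; addresses, NQNs, and flags are taken directly from the trace.

# Sketch of one connect_authenticate pass as seen in the trace above.
digest=sha512
dhgroup=ffdhe4096

# Limit host-side DH-HMAC-CHAP negotiation to this digest and DH group.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target at 10.0.0.1:4420, authenticating with key0 and requesting
# bidirectional authentication via the controller key ckey0.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up, then detach it before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0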
00:16:15.702 nvme0n1 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:15.702 21:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.702 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.703 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.270 nvme0n1 00:16:16.270 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.270 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.270 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.270 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.270 21:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.270 21:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:16.270 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.271 21:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.271 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.530 nvme0n1 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.530 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.531 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.097 nvme0n1 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.098 21:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.357 nvme0n1 00:16:17.357 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.357 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.357 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.357 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.357 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.357 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.616 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.875 nvme0n1 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjA1ZTVkYjA4YWI0MDZhYjcwZTcwYTdkYzVmOTE5YjUKpbVF: 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: ]] 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNiYTUzMWY3YzBiODg0MjQzZjM3YTI2NzZhMzY2YTA1ODQ4YjQ0Mjg2YzJlOWNiNTVlM2FlM2FiNDE1ZjU5MXk4f1w=: 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.875 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.876 21:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.876 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.134 21:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.702 nvme0n1 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.702 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.703 21:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.703 21:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.268 nvme0n1 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3NjJhNGVjM2I1OGQxNDlmZjY1NjgyNjA5NGI4N2QrqGKh: 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: ]] 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJjYmYyZDdhNDgwNmJhNTc3NDM2MmU5MGNlMjExNTL7ioPC: 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:19.268 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.269 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.204 nvme0n1 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDA5M2YzY2E5OTE4MWRkYjE0NGNmMWYwNmM5ZTYwZWVkN2MxZDM4MzFhYzYyMjY2kopZJA==: 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTQyYWQ2MzE5Mzk2ZTNiNGU0MGIwYTVlNTE5OWYwZGRSoKyV: 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.204 21:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.771 nvme0n1 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUyYWRlMmRiZWVhMGM5NGJlNmEzYjZiZGNhMGQ4YjBmYmM2NWE2OWRjNjlhMDBjNzg3OTA1ZTY3ZTc4Y2RmYjiCprw=: 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:20.771 21:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.771 21:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.338 nvme0n1 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFlMmJhY2E4Y2YyZWU3YzcyM2YzODRhODY4ZWQ2Yzg5YzE4YmIxNmJlZjU5NmMyGXebhg==: 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRmYzgxZTExZDVkOWJiMjFkNDM2ZGM5MDA3NzM4OGZlMTU5YzY2ZjUzNGQ2YTU5Q3oTbw==: 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.339 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.606 request: 00:16:21.606 { 00:16:21.606 "name": "nvme0", 00:16:21.606 "trtype": "tcp", 00:16:21.606 "traddr": "10.0.0.1", 00:16:21.606 "adrfam": "ipv4", 00:16:21.606 "trsvcid": "4420", 00:16:21.606 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:21.606 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:21.606 "prchk_reftag": false, 00:16:21.606 "prchk_guard": false, 00:16:21.606 "hdgst": false, 00:16:21.606 "ddgst": false, 00:16:21.606 "method": "bdev_nvme_attach_controller", 00:16:21.606 "req_id": 1 00:16:21.606 } 00:16:21.606 Got JSON-RPC error response 00:16:21.606 response: 00:16:21.606 { 00:16:21.606 "code": -5, 00:16:21.606 "message": "Input/output error" 00:16:21.606 } 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.607 21:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 request: 00:16:21.607 { 00:16:21.607 "name": "nvme0", 00:16:21.607 "trtype": "tcp", 00:16:21.607 "traddr": "10.0.0.1", 00:16:21.607 "adrfam": "ipv4", 00:16:21.607 "trsvcid": "4420", 00:16:21.607 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:21.607 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:21.607 "prchk_reftag": false, 00:16:21.607 "prchk_guard": false, 00:16:21.607 "hdgst": false, 00:16:21.607 "ddgst": false, 00:16:21.607 "dhchap_key": "key2", 00:16:21.607 "method": "bdev_nvme_attach_controller", 00:16:21.607 "req_id": 1 00:16:21.607 } 00:16:21.607 Got JSON-RPC error response 00:16:21.607 response: 00:16:21.607 { 00:16:21.607 "code": -5, 00:16:21.607 "message": "Input/output error" 00:16:21.607 } 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.607 21:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 request: 00:16:21.607 { 00:16:21.607 "name": "nvme0", 00:16:21.607 "trtype": "tcp", 00:16:21.607 "traddr": "10.0.0.1", 00:16:21.607 "adrfam": "ipv4", 00:16:21.607 "trsvcid": "4420", 00:16:21.607 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:21.607 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:16:21.607 "prchk_reftag": false, 00:16:21.607 "prchk_guard": false, 00:16:21.607 "hdgst": false, 00:16:21.607 "ddgst": false, 00:16:21.607 "dhchap_key": "key1", 00:16:21.607 "dhchap_ctrlr_key": "ckey2", 00:16:21.607 "method": "bdev_nvme_attach_controller", 00:16:21.607 "req_id": 1 00:16:21.607 } 00:16:21.607 Got JSON-RPC error response 00:16:21.607 response: 00:16:21.607 { 00:16:21.607 "code": -5, 00:16:21.607 "message": "Input/output error" 00:16:21.607 } 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.607 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.607 rmmod nvme_tcp 00:16:21.607 rmmod nvme_fabrics 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77522 ']' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77522 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77522 ']' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77522 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77522 00:16:21.883 killing process with pid 77522 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77522' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77522 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 77522 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.883 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:22.141 21:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:22.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:22.967 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:22.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:22.967 21:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jzZ /tmp/spdk.key-null.DKO /tmp/spdk.key-sha256.WtH /tmp/spdk.key-sha384.5Wq /tmp/spdk.key-sha512.DBv /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:22.967 21:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:23.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.225 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:23.225 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:16:23.485 00:16:23.485 real 0m36.010s 00:16:23.485 user 0m32.477s 00:16:23.485 sys 0m3.949s 00:16:23.485 ************************************ 00:16:23.485 END TEST nvmf_auth_host 00:16:23.485 ************************************ 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.485 ************************************ 00:16:23.485 START TEST nvmf_digest 00:16:23.485 ************************************ 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:23.485 * Looking for test storage... 00:16:23.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.485 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.486 21:38:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.486 Cannot find device "nvmf_tgt_br" 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.486 Cannot find device "nvmf_tgt_br2" 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.486 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.745 Cannot find device "nvmf_tgt_br" 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.745 Cannot find device "nvmf_tgt_br2" 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.745 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:24.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:16:24.005 00:16:24.005 --- 10.0.0.2 ping statistics --- 00:16:24.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.005 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:24.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:24.005 00:16:24.005 --- 10.0.0.3 ping statistics --- 00:16:24.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.005 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:24.005 00:16:24.005 --- 10.0.0.1 ping statistics --- 00:16:24.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.005 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:24.005 ************************************ 00:16:24.005 START TEST nvmf_digest_clean 00:16:24.005 ************************************ 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
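The nvmf_veth_init trace above is what gives the digest test its network: the target will run inside the nvmf_tgt_ns_spdk namespace, reachable from the host over veth pairs bridged on nvmf_br, with the initiator at 10.0.0.1 and the target at 10.0.0.2 (10.0.0.3 on the second interface), and the three pings confirm the path before anything NVMe-related starts. A condensed sketch of that topology, reconstructed only from the commands visible in the trace (the second veth pair and the link-up steps follow the same pattern and are elided here):

  ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                            # bridge tying the *_br ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                         # same sanity check as in the log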
00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79112 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79112 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79112 ']' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.005 21:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.005 [2024-07-24 21:38:08.852499] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:24.005 [2024-07-24 21:38:08.852592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.005 [2024-07-24 21:38:08.994466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.264 [2024-07-24 21:38:09.149063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.264 [2024-07-24 21:38:09.149349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.264 [2024-07-24 21:38:09.149604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.264 [2024-07-24 21:38:09.149866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.264 [2024-07-24 21:38:09.150098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
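Up to this point the harness has wired the initiator side to the target namespace and started the target itself: the veth endpoints are brought up, enslaved to the nvmf_br bridge, TCP port 4420 is opened in iptables, connectivity is verified with single pings in both directions, and nvmf_tgt is launched inside nvmf_tgt_ns_spdk with --wait-for-rpc. Condensed into a sketch, using the interface names, addresses and paths exactly as they appear in the trace above (the veth pairs and the nvmf_tgt_ns_spdk namespace are created earlier in nvmf/common.sh and are assumed to already exist):

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target namespace address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &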
00:16:24.264 [2024-07-24 21:38:09.150163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.199 21:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:25.199 [2024-07-24 21:38:09.959899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:25.199 null0 00:16:25.199 [2024-07-24 21:38:10.010934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.199 [2024-07-24 21:38:10.035100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79144 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79144 /var/tmp/bperf.sock 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79144 ']' 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.199 21:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:25.199 [2024-07-24 21:38:10.100242] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:25.200 [2024-07-24 21:38:10.100839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79144 ] 00:16:25.458 [2024-07-24 21:38:10.239506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.458 [2024-07-24 21:38:10.400293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.392 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.392 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:26.392 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:26.392 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:26.392 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:26.392 [2024-07-24 21:38:11.378702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.650 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.650 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.909 nvme0n1 00:16:26.909 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:26.909 21:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:26.909 Running I/O for 2 seconds... 
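Each clean-digest pass follows the same RPC choreography over the bperf socket, all of it visible in the trace: bdevperf is started with --wait-for-rpc on /var/tmp/bperf.sock, framework initialization is kicked off, an NVMe-oF controller is attached with the data digest enabled, and bdevperf.py then drives the 2-second run. A condensed sketch built from the commands the trace itself issues:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag turns on the NVMe/TCP data digest (crc32c over the data PDUs), which is what this test exercises; the header digest is left off in these runs.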
00:16:29.438 00:16:29.438 Latency(us) 00:16:29.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:29.438 nvme0n1 : 2.01 15962.34 62.35 0.00 0.00 8013.33 7000.44 21448.15 00:16:29.438 =================================================================================================================== 00:16:29.438 Total : 15962.34 62.35 0.00 0.00 8013.33 7000.44 21448.15 00:16:29.438 0 00:16:29.438 21:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:29.438 21:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:29.438 21:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:29.438 21:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:29.438 | select(.opcode=="crc32c") 00:16:29.438 | "\(.module_name) \(.executed)"' 00:16:29.438 21:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79144 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79144 ']' 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79144 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79144 00:16:29.438 killing process with pid 79144 00:16:29.438 Received shutdown signal, test time was about 2.000000 seconds 00:16:29.438 00:16:29.438 Latency(us) 00:16:29.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.438 =================================================================================================================== 00:16:29.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79144' 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79144 00:16:29.438 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79144 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79205 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79205 /var/tmp/bperf.sock 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79205 ']' 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.696 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:29.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:29.697 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.697 21:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:29.697 [2024-07-24 21:38:14.569249] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:29.697 [2024-07-24 21:38:14.569528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79205 ] 00:16:29.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:29.697 Zero copy mechanism will not be used. 
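After each run, as seen at the end of the pass above, the script reads the crc32c accounting back from the bdevperf process and checks that the digests were actually computed, and by the expected accel module. Since DSA scanning is off here, the expected module is software. A sketch of that check, with the accel_get_stats call and jq filter taken verbatim from the trace:

  read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]]   # DSA disabled, so the software module must have done the work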
00:16:29.955 [2024-07-24 21:38:14.706230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.955 [2024-07-24 21:38:14.857395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.904 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.904 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:30.904 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:30.904 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:30.904 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:30.904 [2024-07-24 21:38:15.865519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:31.203 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.203 21:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.467 nvme0n1 00:16:31.467 21:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:31.467 21:38:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:31.467 Zero copy mechanism will not be used. 00:16:31.467 Running I/O for 2 seconds... 
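The MiB/s column in the result tables these runs print is simply IOPS multiplied by the I/O size: for the 4 KiB randread table above, 15962.34 IOPS at 4096 bytes works out to 62.35 MiB/s, and the 128 KiB tables later in the log follow the same identity (6747.54 IOPS at 131072 bytes gives 843.44 MiB/s). A quick check in shell:

  awk 'BEGIN { printf "%.2f MiB/s\n", 15962.34 * 4096 / (1024 * 1024) }'   # prints 62.35 MiB/s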
00:16:33.374 00:16:33.374 Latency(us) 00:16:33.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.374 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:33.374 nvme0n1 : 2.00 6747.54 843.44 0.00 0.00 2367.91 2129.92 10902.81 00:16:33.374 =================================================================================================================== 00:16:33.374 Total : 6747.54 843.44 0.00 0.00 2367.91 2129.92 10902.81 00:16:33.374 0 00:16:33.374 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:33.374 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:33.374 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:33.374 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:33.374 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:33.374 | select(.opcode=="crc32c") 00:16:33.374 | "\(.module_name) \(.executed)"' 00:16:33.632 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79205 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79205 ']' 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79205 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.633 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79205 00:16:33.892 killing process with pid 79205 00:16:33.892 Received shutdown signal, test time was about 2.000000 seconds 00:16:33.892 00:16:33.892 Latency(us) 00:16:33.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.892 =================================================================================================================== 00:16:33.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:33.892 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:33.892 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:33.892 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79205' 00:16:33.892 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79205 00:16:33.892 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79205 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79271 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79271 /var/tmp/bperf.sock 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79271 ']' 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:34.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.151 21:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:34.151 [2024-07-24 21:38:19.012525] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:16:34.151 [2024-07-24 21:38:19.012935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79271 ] 00:16:34.410 [2024-07-24 21:38:19.155942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.410 [2024-07-24 21:38:19.236258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.978 21:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.978 21:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:34.978 21:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:34.978 21:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:34.978 21:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:35.563 [2024-07-24 21:38:20.239400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:35.563 21:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:35.563 21:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:35.822 nvme0n1 00:16:35.822 21:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:35.822 21:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:35.822 Running I/O for 2 seconds... 
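run_digest drives run_bperf through the same four workload shapes seen in this trace: randread and randwrite, each at 4 KiB with queue depth 128 and 128 KiB with queue depth 16, always with DSA scanning off. Purely as an illustration of that parameter grid (the real sequencing lives in host/digest.sh and also covers the DSA-enabled variants when configured), the bdevperf invocations could be generated like this:

  for spec in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
    read -r rw bs qd <<< "$spec"
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    # ...then framework_start_init, attach with --ddgst, perform_tests, the accel stats check, and killprocess, as in the trace...
  done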
00:16:38.352 00:16:38.352 Latency(us) 00:16:38.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.352 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.352 nvme0n1 : 2.00 16495.78 64.44 0.00 0.00 7752.41 6494.02 15490.33 00:16:38.352 =================================================================================================================== 00:16:38.352 Total : 16495.78 64.44 0.00 0.00 7752.41 6494.02 15490.33 00:16:38.352 0 00:16:38.352 21:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:38.352 21:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:38.352 21:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:38.352 21:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:38.352 21:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:38.352 | select(.opcode=="crc32c") 00:16:38.352 | "\(.module_name) \(.executed)"' 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79271 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79271 ']' 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79271 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79271 00:16:38.352 killing process with pid 79271 00:16:38.352 Received shutdown signal, test time was about 2.000000 seconds 00:16:38.352 00:16:38.352 Latency(us) 00:16:38.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.352 =================================================================================================================== 00:16:38.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79271' 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79271 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79271 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:38.352 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79326 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79326 /var/tmp/bperf.sock 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79326 ']' 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:38.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.353 21:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.610 [2024-07-24 21:38:23.390551] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:38.611 [2024-07-24 21:38:23.391132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79326 ] 00:16:38.611 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:38.611 Zero copy mechanism will not be used. 
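waitforlisten (whose max_retries=100 shows up in the trace) blocks until the freshly forked bdevperf process is actually answering RPCs on its UNIX socket before any bperf_rpc call is issued. A simplified stand-in with the same intent, not the actual autotest_common.sh implementation:

  for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/bperf.sock rpc_get_methods > /dev/null 2>&1 && break
    sleep 0.5
  done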
00:16:38.611 [2024-07-24 21:38:23.529778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.869 [2024-07-24 21:38:23.643576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.435 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.435 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:39.435 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:39.435 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:39.436 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:39.694 [2024-07-24 21:38:24.609757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:39.694 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:39.694 21:38:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.262 nvme0n1 00:16:40.262 21:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:40.262 21:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:40.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:40.262 Zero copy mechanism will not be used. 00:16:40.262 Running I/O for 2 seconds... 
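When a pass finishes, killprocess tears the bdevperf instance down using the commands interleaved in the trace above: it confirms the PID is still alive, inspects the process name (reactor_1 here, i.e. a plain SPDK process rather than a sudo wrapper), then signals it and reaps it. Roughly, where $pid stands for the bperfpid recorded at launch:

  kill -0 "$pid"                              # still running?
  pname=$(ps --no-headers -o comm= "$pid")    # reactor_1 in these runs; a sudo result would change how it is killed
  kill "$pid" && wait "$pid"                  # wait reaps the child and surfaces its exit status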
00:16:42.225 00:16:42.225 Latency(us) 00:16:42.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.225 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:42.225 nvme0n1 : 2.00 5653.82 706.73 0.00 0.00 2824.15 2055.45 10783.65 00:16:42.225 =================================================================================================================== 00:16:42.225 Total : 5653.82 706.73 0.00 0.00 2824.15 2055.45 10783.65 00:16:42.225 0 00:16:42.225 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:42.225 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:42.225 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:42.225 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:42.225 | select(.opcode=="crc32c") 00:16:42.225 | "\(.module_name) \(.executed)"' 00:16:42.225 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79326 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79326 ']' 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79326 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79326 00:16:42.484 killing process with pid 79326 00:16:42.484 Received shutdown signal, test time was about 2.000000 seconds 00:16:42.484 00:16:42.484 Latency(us) 00:16:42.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.484 =================================================================================================================== 00:16:42.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79326' 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79326 00:16:42.484 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79326 00:16:42.742 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79112 00:16:42.742 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79112 ']' 00:16:42.742 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79112 00:16:42.743 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79112 00:16:43.001 killing process with pid 79112 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79112' 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79112 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79112 00:16:43.001 00:16:43.001 real 0m19.206s 00:16:43.001 user 0m36.171s 00:16:43.001 sys 0m5.625s 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.001 ************************************ 00:16:43.001 21:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:43.001 END TEST nvmf_digest_clean 00:16:43.001 ************************************ 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:43.259 ************************************ 00:16:43.259 START TEST nvmf_digest_error 00:16:43.259 ************************************ 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79415 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79415 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79415 ']' 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.259 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.260 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.260 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.260 21:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:43.260 [2024-07-24 21:38:28.115928] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:43.260 [2024-07-24 21:38:28.116031] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.260 [2024-07-24 21:38:28.256427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.518 [2024-07-24 21:38:28.376086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.518 [2024-07-24 21:38:28.376158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.518 [2024-07-24 21:38:28.376186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.518 [2024-07-24 21:38:28.376195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.518 [2024-07-24 21:38:28.376202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
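The nvmf_digest_error test starting here reuses the same topology but deliberately breaks the digests: the target is restarted with --wait-for-rpc so that, before initialization completes, the crc32c opcode can be handed to SPDK's error-injecting accel module, and corruption is then switched on just before the I/O run (the accel_assign_opc and accel_error_inject_error calls appear further down in the trace). In condensed form, with the RPCs as the trace issues them and the target-side framework_start_init assumed to be part of the batched common_target_config:

  # target side (default /var/tmp/spdk.sock, after nvmf_tgt ... --wait-for-rpc is up)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init      # assumed: done inside the batched common_target_config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator side (bperf socket); the controller is attached with injection disabled, then the test runs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The corrupted crc32c results are what produce the "data digest error" messages and the COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of this log.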
00:16:43.518 [2024-07-24 21:38:28.376230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 [2024-07-24 21:38:29.156803] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 [2024-07-24 21:38:29.221492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:44.454 null0 00:16:44.454 [2024-07-24 21:38:29.270933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.454 [2024-07-24 21:38:29.295066] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79447 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79447 /var/tmp/bperf.sock 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:44.454 21:38:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79447 ']' 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:44.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.454 21:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 [2024-07-24 21:38:29.356431] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:44.454 [2024-07-24 21:38:29.357031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79447 ] 00:16:44.712 [2024-07-24 21:38:29.497999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.712 [2024-07-24 21:38:29.661393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.971 [2024-07-24 21:38:29.737674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:45.538 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.538 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:16:45.538 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:45.538 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:45.796 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:45.796 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.796 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:45.796 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.796 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:45.796 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.056 nvme0n1 00:16:46.056 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:46.056 21:38:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.056 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:46.056 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.056 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:46.056 21:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:46.056 Running I/O for 2 seconds... 00:16:46.056 [2024-07-24 21:38:31.019175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.056 [2024-07-24 21:38:31.019268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.056 [2024-07-24 21:38:31.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.056 [2024-07-24 21:38:31.036861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.056 [2024-07-24 21:38:31.036927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.056 [2024-07-24 21:38:31.036955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.056 [2024-07-24 21:38:31.053961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.056 [2024-07-24 21:38:31.054020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.056 [2024-07-24 21:38:31.054048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.071046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.071111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.071129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.087635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.087704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.087722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.104931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.105018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12905 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.105036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.122332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.122397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.122420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.140196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.140294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.140320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.157857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.157949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.157967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.175121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.175195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.175212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.192234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.192288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.192305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.208826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.208877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.208893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.225400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.225457] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.225478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.242150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.242202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.242230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.258832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.258887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.258902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.275848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.275900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.275926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.292455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.292526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.292545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.316 [2024-07-24 21:38:31.309131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.316 [2024-07-24 21:38:31.309204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.316 [2024-07-24 21:38:31.309222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.575 [2024-07-24 21:38:31.326691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.575 [2024-07-24 21:38:31.326762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.575 [2024-07-24 21:38:31.326779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.343946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 
21:38:31.344022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.344044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.361066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.361143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.361163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.377952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.378024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.378045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.395132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.395203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.395220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.412396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.412459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.412476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.429498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.429570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.429586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.446449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.446505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.446528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.463501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.463568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.463584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.480678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.480745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.480772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.497841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.497917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.497934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.514766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.514853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.514876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.532375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.532451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.532472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.548942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.549013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.549030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.576 [2024-07-24 21:38:31.565529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.576 [2024-07-24 21:38:31.565602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.576 [2024-07-24 21:38:31.565645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.835 [2024-07-24 21:38:31.582668] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.835 [2024-07-24 21:38:31.582739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.835 [2024-07-24 21:38:31.582756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.835 [2024-07-24 21:38:31.599317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.835 [2024-07-24 21:38:31.599430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.835 [2024-07-24 21:38:31.599449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.835 [2024-07-24 21:38:31.616035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.835 [2024-07-24 21:38:31.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.835 [2024-07-24 21:38:31.616136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.835 [2024-07-24 21:38:31.632559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.835 [2024-07-24 21:38:31.632651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.632670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.649294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.649368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.649394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.666321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.666395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.666412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.683307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.683407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.683434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:46.836 [2024-07-24 21:38:31.700609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.700698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.700715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.717467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.717550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.717581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.734216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.734295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.734322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.750993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.751104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.751123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.767919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.767996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.785198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.785279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.785296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.802259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.802345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.802373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.819082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.819157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.819175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.836 [2024-07-24 21:38:31.835850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:46.836 [2024-07-24 21:38:31.835930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.836 [2024-07-24 21:38:31.835947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.853210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.853274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.853290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.870711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.870779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.870796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.888188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.888269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.888287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.905055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.905116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.905133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.921990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.922070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.922087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.938886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.938952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.938984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.955924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.956001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.956020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.972906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.972985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.973001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:31.989942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:31.990015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:31.990040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:32.006757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:32.006829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:32.006847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:32.024051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:32.024131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:32.024148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:32.041751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:32.041828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10543 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:32.041845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:32.059431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:32.059511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:32.059543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.095 [2024-07-24 21:38:32.076709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.095 [2024-07-24 21:38:32.076784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.095 [2024-07-24 21:38:32.076801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.354 [2024-07-24 21:38:32.101026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.354 [2024-07-24 21:38:32.101111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.354 [2024-07-24 21:38:32.101129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.354 [2024-07-24 21:38:32.117959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.354 [2024-07-24 21:38:32.118019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.354 [2024-07-24 21:38:32.118035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.135336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.135396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.135423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.152583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.152660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.152677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.169931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.169986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:19904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.170002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.187371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.187443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.187464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.204121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.204172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.204188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.220630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.220685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.220709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.237089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.237159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.237187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.253774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.253848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.253871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.270303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.270373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.270397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.287006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.287106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.287123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.304173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.304241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.304263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.320711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.320766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.320790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.337156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.337214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.337242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.355 [2024-07-24 21:38:32.353725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.355 [2024-07-24 21:38:32.353780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.355 [2024-07-24 21:38:32.353797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.614 [2024-07-24 21:38:32.370558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.614 [2024-07-24 21:38:32.370614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.614 [2024-07-24 21:38:32.370650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.614 [2024-07-24 21:38:32.386994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.614 [2024-07-24 21:38:32.387072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.614 [2024-07-24 21:38:32.387089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.614 [2024-07-24 21:38:32.403494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14574f0) 00:16:47.614 [2024-07-24 21:38:32.403564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.614 [2024-07-24 21:38:32.403582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.614 [2024-07-24 21:38:32.420003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.614 [2024-07-24 21:38:32.420070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.614 [2024-07-24 21:38:32.420087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.436648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.436740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.436770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.453221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.453284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.453308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.469761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.469813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.469836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.486352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.486404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.486430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.502939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.502986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.503009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.519480] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.519528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.519544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.535882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.535934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.535960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.552417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.552485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.552500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.569275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.569333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.569349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.585778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.585843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.585864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.615 [2024-07-24 21:38:32.602161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.615 [2024-07-24 21:38:32.602233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.615 [2024-07-24 21:38:32.602249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.874 [2024-07-24 21:38:32.619453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.874 [2024-07-24 21:38:32.619524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.619540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:47.875 [2024-07-24 21:38:32.636058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.636130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.636150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.652678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.652745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.652761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.669227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.669283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.669299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.686404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.686462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.686480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.703264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.703312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.703327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.719672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.719720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.719741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.736309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.736373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.736398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.752748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.752802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.752818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.768754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.768811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.768826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.784997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.785054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.785070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.801292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.801366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.818213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.818276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.818292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.834863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.834940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.834959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.851376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.851459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.851475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.875 [2024-07-24 21:38:32.867823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:47.875 [2024-07-24 21:38:32.867893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.875 [2024-07-24 21:38:32.867910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.884863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.884929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.884949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.901382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.901450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.901466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.917839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.917905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.917926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.934121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.934193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.934214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.950582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.950674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.950690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.967323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.967406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:48.135 [2024-07-24 21:38:32.967422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.983551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.983605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.983635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 [2024-07-24 21:38:32.999172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14574f0) 00:16:48.135 [2024-07-24 21:38:32.999230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.135 [2024-07-24 21:38:32.999246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.135 00:16:48.135 Latency(us) 00:16:48.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.135 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:48.135 nvme0n1 : 2.01 15004.38 58.61 0.00 0.00 8523.93 7506.85 32887.16 00:16:48.135 =================================================================================================================== 00:16:48.135 Total : 15004.38 58.61 0.00 0.00 8523.93 7506.85 32887.16 00:16:48.135 0 00:16:48.135 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:48.135 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:48.135 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:48.135 | .driver_specific 00:16:48.135 | .nvme_error 00:16:48.135 | .status_code 00:16:48.135 | .command_transient_transport_error' 00:16:48.135 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79447 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79447 ']' 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79447 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79447 00:16:48.395 killing process with pid 79447 00:16:48.395 Received shutdown signal, test time was about 2.000000 seconds 00:16:48.395 00:16:48.395 Latency(us) 00:16:48.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.395 
=================================================================================================================== 00:16:48.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79447' 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79447 00:16:48.395 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79447 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79507 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79507 /var/tmp/bperf.sock 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79507 ']' 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:48.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.652 21:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:48.910 [2024-07-24 21:38:33.682432] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:48.910 [2024-07-24 21:38:33.682928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79507 ] 00:16:48.910 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:48.910 Zero copy mechanism will not be used. 
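Annotation: the trace above is the pass/fail gate for the first digest-error run. get_transient_errcount extracts the command_transient_transport_error counter from bdev_get_iostat with jq, the test proceeds only if it is non-zero (here "(( 118 > 0 ))"), and the first bdevperf instance (pid 79447) is then killed before a second one is launched for the 131072-byte, queue-depth-16 random-read pass. A minimal standalone sketch of that check follows; the socket path, bdev name, and jq filter are copied from the trace, but this is an illustration, not the verbatim digest.sh helper.

    #!/usr/bin/env bash
    # Sketch of the transient-error check recorded in the xtrace above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Count NVMe completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1.
    errcount=$("$rpc_py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The injected crc32c corruption must have produced at least one such error.
    (( errcount > 0 )) || exit 1
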
00:16:48.910 [2024-07-24 21:38:33.823528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.168 [2024-07-24 21:38:33.933476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.168 [2024-07-24 21:38:34.003711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:49.746 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.746 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:16:49.746 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:49.746 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:50.005 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:50.005 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.005 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:50.005 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.005 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.005 21:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.264 nvme0n1 00:16:50.264 21:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:50.264 21:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.264 21:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:50.264 21:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.264 21:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:50.264 21:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:50.525 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:50.525 Zero copy mechanism will not be used. 00:16:50.525 Running I/O for 2 seconds... 
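Gathered from the bperf_rpc/rpc_cmd trace just above, the setup for this digest-error run reduces to the RPC sequence below (commands copied from the log; the $rpc shorthand is added only for readability).

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  # Keep per-status-code NVMe error statistics and retry failed commands indefinitely.
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with crc32c error injection disabled, then attach the target with data
  # digest (--ddgst) enabled so received data PDUs are CRC-checked.
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (arguments as traced above); the mismatching digests are
  # what produce the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # completions that follow.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Release the queued bdevperf job (the "bperf_py perform_tests" step in the trace).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests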
00:16:50.525 [2024-07-24 21:38:35.283347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.283450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.283466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.288243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.288277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.288297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.292735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.292768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.292787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.297099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.297134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.297146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.301437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.301470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.301490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.305831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.305865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.305885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.310331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.310364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.310384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.314715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.314747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.314767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.319197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.319231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.319243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.323923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.323959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.323971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.328700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.328733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.328753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.333225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.333258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.333277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.337569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.337602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.337629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.341943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.525 [2024-07-24 21:38:35.341977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.525 [2024-07-24 21:38:35.341989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.525 [2024-07-24 21:38:35.346229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.346262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.346281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.350727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.350758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.350779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.355015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.355081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.355094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.359393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.359437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.359456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.363809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.363841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.363861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.368052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.368086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.368098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.372393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.372425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:50.526 [2024-07-24 21:38:35.372445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.376884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.376916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.376936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.381232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.381265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.381285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.385610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.385651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.385663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.389885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.389918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.389939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.394191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.394225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.394243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.398547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.398580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.398599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.402916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.402948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.402969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.407356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.407399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.407410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.411869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.411900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.411912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.416175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.416208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.416220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.420745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.420776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.420788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.425251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.425283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.425303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.429755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.429786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.429806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.434506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.434550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.434571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.439231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.439264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.439276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.444054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.444087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.444106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.448847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.448880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.448892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.453440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.453473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.453492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.526 [2024-07-24 21:38:35.457905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.526 [2024-07-24 21:38:35.457938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.526 [2024-07-24 21:38:35.457949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.462411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.462442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.462453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.466891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 
00:16:50.527 [2024-07-24 21:38:35.466922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.466944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.471295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.471327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.471339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.475793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.475825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.475844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.480329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.480362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.480382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.484776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.484808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.484826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.489055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.489089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.489110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.493460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.493493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.493512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.497930] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.497963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.497984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.502294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.502327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.502346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.506760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.506794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.506813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.511265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.511301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.511314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.515870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.515902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.515924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.527 [2024-07-24 21:38:35.520352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.527 [2024-07-24 21:38:35.520386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.527 [2024-07-24 21:38:35.520404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.787 [2024-07-24 21:38:35.525192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.787 [2024-07-24 21:38:35.525227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.525248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.529941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.529974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.530001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.534434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.534466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.534485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.538848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.538880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.538899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.543279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.543313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.543326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.547919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.547951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.547971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.552416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.552448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.552467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.556980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.557012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.557033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.561454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.561486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.561505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.566015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.566048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.566059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.570546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.570578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.570598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.575344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.575376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.575391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.580027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.580059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.580071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.584730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.584763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.584774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.589364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.589397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.589417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.594017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.594049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.594070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.598327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.598360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.598381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.602772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.602804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.602825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.607368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.607411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.607427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.612023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.612066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.612089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.616544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.616577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.616597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.621114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.621147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.621166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.625468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.625501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.625520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.629974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.630007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.630019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.634390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.634423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.634434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.638844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.638877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.638889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.643257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.788 [2024-07-24 21:38:35.643290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.788 [2024-07-24 21:38:35.643301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.788 [2024-07-24 21:38:35.647868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.647899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.647920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.652161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.652193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.652212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.656646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.656678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.656697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.661043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.661075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.661095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.665461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.665493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.665513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.669804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.669838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.669857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.674092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.674124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.674146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.678383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.678416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.678436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.682794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.682827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.682838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.687061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.687095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.687107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.691387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.691420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.691440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.695758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.695789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.695810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.700091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.700122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.700134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.704380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.704414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.704433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.708837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.708871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.708882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.713169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 
00:16:50.789 [2024-07-24 21:38:35.713201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.713221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.717507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.717540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.717560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.721890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.721938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.721958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.726918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.726954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.726983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.732200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.732235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.732255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.737635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.737683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.737697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.742326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.742359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.742378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.746809] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.746842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.746854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.751198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.751233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.751245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.755612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.755655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.755668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.760198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.760231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.760250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.764596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.789 [2024-07-24 21:38:35.764635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.789 [2024-07-24 21:38:35.764649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.789 [2024-07-24 21:38:35.768954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.790 [2024-07-24 21:38:35.768987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.790 [2024-07-24 21:38:35.769006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.790 [2024-07-24 21:38:35.773312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.790 [2024-07-24 21:38:35.773344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.790 [2024-07-24 21:38:35.773362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:50.790 [2024-07-24 21:38:35.777715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.790 [2024-07-24 21:38:35.777745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.790 [2024-07-24 21:38:35.777765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:50.790 [2024-07-24 21:38:35.782163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.790 [2024-07-24 21:38:35.782197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.790 [2024-07-24 21:38:35.782217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.790 [2024-07-24 21:38:35.786740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:50.790 [2024-07-24 21:38:35.786772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.790 [2024-07-24 21:38:35.786791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.791226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.791260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.791272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.795972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.796003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.796022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.800354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.800387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.800404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.804769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.804801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.804820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.809273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.809305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.809324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.813836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.813868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.813887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.818139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.818171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.818189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.822513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.822545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.822564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.827073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.827108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.827120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.831590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.831630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.831643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.835918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.835951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.835970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.840205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.840237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.840258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.844661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.844693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.844704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.848988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.849020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.849031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.853272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.853304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.853323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.857696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.857727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.857747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.862036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.862069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.862089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.866474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.866503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:51.050 [2024-07-24 21:38:35.866514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.870853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.870881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.870893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.875220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.875249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.875260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.879609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.879653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.879665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.883941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.883969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.883980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.888259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.888287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.888299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.892876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.892904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.892916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.897414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.897451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.050 [2024-07-24 21:38:35.897472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.050 [2024-07-24 21:38:35.901868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.050 [2024-07-24 21:38:35.901896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.901907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.906215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.906243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.906256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.910503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.910531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.910543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.914836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.914864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.914876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.919157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.919185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.919195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.923524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.923552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.923565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.927969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.927997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.928009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.932261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.932289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.932301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.936757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.936785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.936796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.941254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.941283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.941295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.945683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.945709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.945720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.949987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.950016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.950027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.954280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.954309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.954320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.958636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 
00:16:51.051 [2024-07-24 21:38:35.958675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.958686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.963004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.963049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.963060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.967298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.967326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.967346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.971613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.971650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.971663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.976020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.976048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.976060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.980368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.980397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.980408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.984754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.984781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.984791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.989081] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.989110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.989120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.993408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.993436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.993448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:35.997725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:35.997753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:35.997764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:36.002018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:36.002046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:36.002056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:36.006404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:36.006432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:36.006444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:36.010759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:36.010787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:36.010800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:36.015112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.051 [2024-07-24 21:38:36.015141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.051 [2024-07-24 21:38:36.015151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:16:51.051 [2024-07-24 21:38:36.019520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.019549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.019560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.052 [2024-07-24 21:38:36.023874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.023901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.023915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.052 [2024-07-24 21:38:36.028114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.028143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.028155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.052 [2024-07-24 21:38:36.032460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.032487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.032498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.052 [2024-07-24 21:38:36.036778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.036805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.036819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.052 [2024-07-24 21:38:36.041120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.041147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.041160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.052 [2024-07-24 21:38:36.045433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.052 [2024-07-24 21:38:36.045460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.052 [2024-07-24 21:38:36.045473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.050117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.050153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.050163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.054681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.054708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.054721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.059110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.059138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.059148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.063520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.063548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.063562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.067879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.067907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.067921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.072191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.072220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.072233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.076560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.076588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.076601] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.081042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.081070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.081083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.085356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.085384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.085396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.089725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.312 [2024-07-24 21:38:36.089752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.312 [2024-07-24 21:38:36.089765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.312 [2024-07-24 21:38:36.093986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.094014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.094027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.098348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.098376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.098386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.102711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.102738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.102752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.106954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.106983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.106997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.111350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.111384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.111400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.116010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.116050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.120596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.120641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.120653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.125309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.125339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.125352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.130303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.130334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.130346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.135138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.135182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.135193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.139989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.140033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.140054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.144710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.144743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.144754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.149311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.149339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.149350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.153879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.153907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.153919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.158331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.158359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.158371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.162789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.162816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.162829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.167059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.167087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.167098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.171888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.171947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.171958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.176743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.176787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.181454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.181483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.181495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.186302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.186330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.186341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.191056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.191087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.191100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.195833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.195864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.195876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.200796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.200827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.200839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.206024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 
00:16:51.313 [2024-07-24 21:38:36.206054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.206065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.211538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.211572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.211585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.313 [2024-07-24 21:38:36.216806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.313 [2024-07-24 21:38:36.216840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.313 [2024-07-24 21:38:36.216853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.222250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.222296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.222309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.227523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.227581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.227595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.233169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.233214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.233242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.238287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.238315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.238327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.243501] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.243529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.243542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.248618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.248664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.248678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.253500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.253529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.253541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.258461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.258489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.258499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.263579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.263609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.263630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.268495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.268523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.268534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.273108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.273137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.273149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.277543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.277587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.277598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.282098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.282134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.282156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.286923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.286951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.286964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.291989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.292018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.292033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.296975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.297004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.297015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.302078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.302107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.302121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.307003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.307060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.307073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.314 [2024-07-24 21:38:36.312254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.314 [2024-07-24 21:38:36.312282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.314 [2024-07-24 21:38:36.312310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.317308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.317338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.317350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.322068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.322113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.322123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.326610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.326648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.326670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.331219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.331249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.331261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.336066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.336095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.336107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.340805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.340835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.340846] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.345404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.345432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.345445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.350197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.350225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.350237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.354713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.354741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.354752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.359176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.359216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.359227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.363825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.363856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.363868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.369068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.369105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.369127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.373763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.373790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.373802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.378323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.378351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.378364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.383271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.383303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.383314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.387995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.388024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.388035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.392642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.392685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.392696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.397068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.397097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.397110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.401470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.401499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.401511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.406161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.406189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.406201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.410820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.410848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.410859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.415178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.575 [2024-07-24 21:38:36.415208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.575 [2024-07-24 21:38:36.415235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.575 [2024-07-24 21:38:36.419822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.419851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.419863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.424361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.424389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.424402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.428970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.428998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.429010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.433429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.433457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.433469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.438537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.438568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.438580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.443467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.443498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.443520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.448121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.448149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.448160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.452570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.452615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.452638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.457102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.457131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.457144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.461725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.461753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.461765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.466126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.466154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.466167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.470721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 
00:16:51.576 [2024-07-24 21:38:36.470749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.470762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.475201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.475232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.475243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.479977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.480007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.480018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.484731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.484762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.484774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.489268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.489299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.489310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.493777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.493805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.493816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.498160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.498189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.498202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.502653] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.502680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.502694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.507000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.507075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.507088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.511585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.511614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.511636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.515939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.515967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.515980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.520497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.520526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.520537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.525406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.525435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.525450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.530266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.530295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.530307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.534995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.535023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.535072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.539999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.576 [2024-07-24 21:38:36.540027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.576 [2024-07-24 21:38:36.540041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.576 [2024-07-24 21:38:36.545308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.545336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.545349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.577 [2024-07-24 21:38:36.550222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.550250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.550262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.577 [2024-07-24 21:38:36.555280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.555314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.555327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.577 [2024-07-24 21:38:36.560137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.560167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.560177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.577 [2024-07-24 21:38:36.564677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.564717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.564729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.577 [2024-07-24 21:38:36.569276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.569320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.569330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.577 [2024-07-24 21:38:36.574125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.577 [2024-07-24 21:38:36.574153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.577 [2024-07-24 21:38:36.574166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.578829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.578875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.578885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.583682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.583722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.583741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.588327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.588356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.588369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.593022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.593050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.593063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.597670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.597709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.597722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.602792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.602823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.602835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.607852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.607895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.607907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.612808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.612838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.612855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.617668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.617715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.617727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.622919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.622979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.623000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.628164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.628192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.628203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.633042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.633079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.633103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.638070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.638098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.638118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.642972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.643000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.643015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.647697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.647726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.647738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.652184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.652212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.652224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.656879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.656906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.838 [2024-07-24 21:38:36.656919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.838 [2024-07-24 21:38:36.661254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.838 [2024-07-24 21:38:36.661283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.661293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.665652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.665680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.665691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.669869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.669896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.669908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.675003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.675052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.675073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.679748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.679816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.679827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.684309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.684337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.684348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.688679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.688707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.688718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.693170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.693197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.693209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.697497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.697526] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.697539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.702022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.702057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.702070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.706593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.706639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.706660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.711163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.711192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.711204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.715812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.715839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.715852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.720327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.720355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.720368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.724976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.725004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.725014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.729283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.729311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.729323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.733665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.733693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.733706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.738019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.738047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.742495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.742522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.742534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.746930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.746957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.746969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.751211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.751240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.751251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.756255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.756285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.756299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.760864] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.760909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.760921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.765354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.765382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.765392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.770182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.770211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.770221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.774641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.774679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.774691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.779449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.839 [2024-07-24 21:38:36.779476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.839 [2024-07-24 21:38:36.779488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.839 [2024-07-24 21:38:36.784509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.784536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.784566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.789020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.789048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.789067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.793522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.793550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.793564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.798278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.798306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.798318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.802749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.802776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.802786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.807182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.807210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.807221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.811725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.811753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.811767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.816945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.816976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.816988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.821722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.821749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.821762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.826115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.826143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.826157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.830407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.830436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.830447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.840 [2024-07-24 21:38:36.834923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:51.840 [2024-07-24 21:38:36.834952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.840 [2024-07-24 21:38:36.834963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.839772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.839800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.839811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.844332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.844376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.844388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.849224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.849253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.849264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.853864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.853894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.853906] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.858352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.858381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.858408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.863116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.863146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.863158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.867593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.867639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.867652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.872015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.872045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.872059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.876387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.876416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.876429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.880999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.881027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.881039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.885458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.885487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.885499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.890018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.890046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.890063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.894417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.894445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.894457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.898860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.898889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.898901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.903334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.903378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.903389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.907938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.907967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.907980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.912510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.912539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.912551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.917607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.917654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.917668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.922463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.922493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.922505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.928010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.928047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.928058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.933422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.933454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.933466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.938261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.938290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.938301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.943096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.943141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.943153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.948299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.948330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.948351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.953246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.953274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.953286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.958229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.101 [2024-07-24 21:38:36.958257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.101 [2024-07-24 21:38:36.958269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.101 [2024-07-24 21:38:36.962949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.962977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.962988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.967569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.967598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.967611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.972082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.972115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.972137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.976649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.976676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.976690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.981023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.981051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.981065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.985440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 
00:16:52.102 [2024-07-24 21:38:36.985468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.985479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.990014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.990043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.990058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.994399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.994427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.994439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:36.998839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:36.998867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:36.998878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.003180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.003209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.003220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.007683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.007710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.007721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.012047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.012076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.012088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.016537] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.016566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.020955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.020983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.020995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.025322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.025350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.025364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.029752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.029779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.029791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.034056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.034084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.034097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.038537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.038565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.038577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.042881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.042908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.042921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.047450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.047478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.047490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.051928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.051956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.051969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.056509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.056538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.056550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.061601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.061645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.061656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.066207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.066235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.066245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.070703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.070742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.070753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.075230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.075257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.075268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.102 [2024-07-24 21:38:37.079596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.102 [2024-07-24 21:38:37.079639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.102 [2024-07-24 21:38:37.079651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.103 [2024-07-24 21:38:37.083915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.103 [2024-07-24 21:38:37.083942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.103 [2024-07-24 21:38:37.083954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.103 [2024-07-24 21:38:37.088472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.103 [2024-07-24 21:38:37.088500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.103 [2024-07-24 21:38:37.088512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.103 [2024-07-24 21:38:37.093403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.103 [2024-07-24 21:38:37.093433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.103 [2024-07-24 21:38:37.093445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.103 [2024-07-24 21:38:37.098758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.103 [2024-07-24 21:38:37.098787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.103 [2024-07-24 21:38:37.098807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.103723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.103765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.108672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.108699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.108728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.113745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.113776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.113789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.118455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.118485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.118497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.123211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.123241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.123286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.127837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.127881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.127892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.132313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.132341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.132352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.137183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.137210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.363 [2024-07-24 21:38:37.137226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.141944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.141973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.363 [2024-07-24 21:38:37.141984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.363 [2024-07-24 21:38:37.146607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.363 [2024-07-24 21:38:37.146652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.146679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.152276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.152306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.152318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.157317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.157345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.157357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.162202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.162230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.162243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.167017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.167071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.167084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.171691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.171730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.171753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.176360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.176387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.176400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.180993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.181021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.181035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.185340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.185368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.185380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.189868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.189911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.189924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.194485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.194512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.194524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.198999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.199035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.199063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.203373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.203427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.203437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.207916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.207944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.207957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.212265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.212293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.212303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.216785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.216812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.216824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.221058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.221086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.221098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.225664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.225692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.225704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.230532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.230578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.230590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.235807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.235838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.235850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.241137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.241167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.241178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.246607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.246673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.246687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.251798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.251830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.251843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.257095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.257127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.257140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.262604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.262671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.267851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.267881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.267924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.364 [2024-07-24 21:38:37.272819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15db200) 00:16:52.364 [2024-07-24 21:38:37.272848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.364 [2024-07-24 21:38:37.272860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.364 00:16:52.364 Latency(us) 00:16:52.364 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.365 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:52.365 nvme0n1 : 2.00 6711.33 838.92 0.00 0.00 2380.97 1951.19 8340.95 00:16:52.365 =================================================================================================================== 00:16:52.365 Total : 6711.33 838.92 0.00 0.00 2380.97 1951.19 8340.95 00:16:52.365 0 00:16:52.365 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:52.365 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:52.365 | .driver_specific 00:16:52.365 | .nvme_error 00:16:52.365 | .status_code 00:16:52.365 | .command_transient_transport_error' 00:16:52.365 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:52.365 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:52.623 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 433 > 0 )) 00:16:52.623 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79507 00:16:52.623 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79507 ']' 00:16:52.623 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79507 00:16:52.623 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:16:52.623 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.624 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79507 00:16:52.624 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:52.624 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:52.624 killing process with pid 79507 00:16:52.624 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79507' 00:16:52.624 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79507 00:16:52.624 Received shutdown signal, test time was about 2.000000 seconds 00:16:52.624 00:16:52.624 Latency(us) 00:16:52.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.624 =================================================================================================================== 00:16:52.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.624 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79507 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:52.883 
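The trace above shows how the test derives its pass/fail condition for this run: digest.sh's get_transient_errcount helper queries the bperf RPC socket with bdev_get_iostat and filters the JSON with jq to pull out the command_transient_transport_error counter (433 here), which must be non-zero when data-digest corruption is being injected. The following is a minimal standalone sketch of that same check, assuming the socket path (/var/tmp/bperf.sock) and bdev name (nvme0n1) used in this run; it is an illustration of the commands visible in the trace, not part of the test script itself.

    #!/usr/bin/env bash
    # Read the transient transport error counter for nvme0n1 the same way the
    # digest test does: ask bdevperf's RPC server for per-bdev iostat (with
    # --nvme-error-stat enabled, the NVMe error counters appear under
    # driver_specific.nvme_error) and extract the one status code we care about.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock   # assumption: same bperf RPC socket as in the trace
    bdev=nvme0n1               # assumption: same bdev name as in the trace

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    if (( errcount > 0 )); then
        echo "observed ${errcount} transient transport errors (expected while digest corruption is injected)"
    else
        echo "no transient transport errors recorded" >&2
        exit 1
    fi

After this check the trace tears down the first bdevperf instance (killprocess 79507) and starts the next variant of the test (run_bperf_err randwrite 4096 128), which repeats the same setup with a write workload.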
21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79566 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79566 /var/tmp/bperf.sock 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79566 ']' 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.883 21:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:53.142 [2024-07-24 21:38:37.913123] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:16:53.142 [2024-07-24 21:38:37.913233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79566 ] 00:16:53.142 [2024-07-24 21:38:38.047078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.142 [2024-07-24 21:38:38.143882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.400 [2024-07-24 21:38:38.224108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:53.968 21:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.968 21:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:16:53.968 21:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:53.968 21:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:54.227 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:54.227 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.227 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:54.227 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.227 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller 
--ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:54.227 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:54.486 nvme0n1 00:16:54.486 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:54.486 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.486 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:54.486 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.486 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:54.486 21:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:54.486 Running I/O for 2 seconds... 00:16:54.486 [2024-07-24 21:38:39.436418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fef90 00:16:54.486 [2024-07-24 21:38:39.438640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.486 [2024-07-24 21:38:39.438722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.486 [2024-07-24 21:38:39.450524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190feb58 00:16:54.486 [2024-07-24 21:38:39.452856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.486 [2024-07-24 21:38:39.452890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:54.487 [2024-07-24 21:38:39.465001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fe2e8 00:16:54.487 [2024-07-24 21:38:39.467283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.487 [2024-07-24 21:38:39.467321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:54.487 [2024-07-24 21:38:39.479722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fda78 00:16:54.487 [2024-07-24 21:38:39.481982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.487 [2024-07-24 21:38:39.482013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:54.746 [2024-07-24 21:38:39.494808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fd208 00:16:54.746 [2024-07-24 21:38:39.497139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:18874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.746 [2024-07-24 21:38:39.497168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.509400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fc998 00:16:54.747 [2024-07-24 21:38:39.512011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.512060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.524263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fc128 00:16:54.747 [2024-07-24 21:38:39.526493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.526524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.539403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fb8b8 00:16:54.747 [2024-07-24 21:38:39.541683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.541713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.553890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fb048 00:16:54.747 [2024-07-24 21:38:39.556213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.556257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.568390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fa7d8 00:16:54.747 [2024-07-24 21:38:39.570615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.570677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.582823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f9f68 00:16:54.747 [2024-07-24 21:38:39.585193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.585224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.598270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f96f8 00:16:54.747 [2024-07-24 21:38:39.600566] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.600598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.613227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f8e88 00:16:54.747 [2024-07-24 21:38:39.615603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.615642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.627981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f8618 00:16:54.747 [2024-07-24 21:38:39.630051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.630084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.642754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f7da8 00:16:54.747 [2024-07-24 21:38:39.645087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.645118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.658375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f7538 00:16:54.747 [2024-07-24 21:38:39.660857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.660888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.673636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f6cc8 00:16:54.747 [2024-07-24 21:38:39.675982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.676027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.689550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f6458 00:16:54.747 [2024-07-24 21:38:39.691824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.691857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.704624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f5be8 00:16:54.747 [2024-07-24 21:38:39.706928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.706974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.719731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f5378 00:16:54.747 [2024-07-24 21:38:39.721883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.721913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:54.747 [2024-07-24 21:38:39.733931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f4b08 00:16:54.747 [2024-07-24 21:38:39.735963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:54.747 [2024-07-24 21:38:39.735992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.748430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f4298 00:16:55.007 [2024-07-24 21:38:39.750501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.750531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.763668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f3a28 00:16:55.007 [2024-07-24 21:38:39.765662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.765729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.777878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f31b8 00:16:55.007 [2024-07-24 21:38:39.779904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.779945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.792019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f2948 00:16:55.007 [2024-07-24 21:38:39.793949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.793991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.806170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f20d8 00:16:55.007 [2024-07-24 
21:38:39.808368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.808397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.821823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f1868 00:16:55.007 [2024-07-24 21:38:39.823936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.823985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.836592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f0ff8 00:16:55.007 [2024-07-24 21:38:39.838537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.838582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.851735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f0788 00:16:55.007 [2024-07-24 21:38:39.853658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.853699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.866267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eff18 00:16:55.007 [2024-07-24 21:38:39.868303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.868349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.881385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ef6a8 00:16:55.007 [2024-07-24 21:38:39.883449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.896672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eee38 00:16:55.007 [2024-07-24 21:38:39.898503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.007 [2024-07-24 21:38:39.898533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:55.007 [2024-07-24 21:38:39.911436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ee5c8 
00:16:55.007 [2024-07-24 21:38:39.913389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:39.913418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.008 [2024-07-24 21:38:39.926225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190edd58 00:16:55.008 [2024-07-24 21:38:39.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:39.928154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:55.008 [2024-07-24 21:38:39.940737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ed4e8 00:16:55.008 [2024-07-24 21:38:39.942508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:39.942538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:55.008 [2024-07-24 21:38:39.956019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ecc78 00:16:55.008 [2024-07-24 21:38:39.957829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:39.957859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:55.008 [2024-07-24 21:38:39.970758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ec408 00:16:55.008 [2024-07-24 21:38:39.972633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:39.972674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:55.008 [2024-07-24 21:38:39.985396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ebb98 00:16:55.008 [2024-07-24 21:38:39.987206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:39.987238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:55.008 [2024-07-24 21:38:40.000085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eb328 00:16:55.008 [2024-07-24 21:38:40.001844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.008 [2024-07-24 21:38:40.001875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.015137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) 
with pdu=0x2000190eaab8 00:16:55.268 [2024-07-24 21:38:40.016721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.016751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.028161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ea248 00:16:55.268 [2024-07-24 21:38:40.029677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.029707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.041487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e99d8 00:16:55.268 [2024-07-24 21:38:40.043225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.043262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.055377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e9168 00:16:55.268 [2024-07-24 21:38:40.057166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.057199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.070805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e88f8 00:16:55.268 [2024-07-24 21:38:40.072529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.072574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.085803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e8088 00:16:55.268 [2024-07-24 21:38:40.087415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.087455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.100527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e7818 00:16:55.268 [2024-07-24 21:38:40.102081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.102115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.113984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14ca650) with pdu=0x2000190e6fa8 00:16:55.268 [2024-07-24 21:38:40.115505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.115548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:55.268 [2024-07-24 21:38:40.127508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e6738 00:16:55.268 [2024-07-24 21:38:40.129021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.268 [2024-07-24 21:38:40.129058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.141551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e5ec8 00:16:55.269 [2024-07-24 21:38:40.143083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.143119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.154935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e5658 00:16:55.269 [2024-07-24 21:38:40.156427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.156458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.168272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e4de8 00:16:55.269 [2024-07-24 21:38:40.169673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.169703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.181791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e4578 00:16:55.269 [2024-07-24 21:38:40.183155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.183187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.196089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e3d08 00:16:55.269 [2024-07-24 21:38:40.197514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.197550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.211765] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e3498 00:16:55.269 [2024-07-24 21:38:40.213245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.213293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.228172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e2c28 00:16:55.269 [2024-07-24 21:38:40.229673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.229712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.243725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e23b8 00:16:55.269 [2024-07-24 21:38:40.245284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.245319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:55.269 [2024-07-24 21:38:40.259511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e1b48 00:16:55.269 [2024-07-24 21:38:40.261010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.269 [2024-07-24 21:38:40.261042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.275013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e12d8 00:16:55.528 [2024-07-24 21:38:40.276411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.276451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.289222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e0a68 00:16:55.528 [2024-07-24 21:38:40.290486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.290513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.305096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e01f8 00:16:55.528 [2024-07-24 21:38:40.306581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.306647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 
21:38:40.321824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190df988 00:16:55.528 [2024-07-24 21:38:40.323262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.323293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.337761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190df118 00:16:55.528 [2024-07-24 21:38:40.339166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.339196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.354113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190de8a8 00:16:55.528 [2024-07-24 21:38:40.355494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.355521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.369245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190de038 00:16:55.528 [2024-07-24 21:38:40.370540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.370604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.390342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190de038 00:16:55.528 [2024-07-24 21:38:40.392739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.392772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.405204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190de8a8 00:16:55.528 [2024-07-24 21:38:40.407618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.407654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.419864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190df118 00:16:55.528 [2024-07-24 21:38:40.422526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.422554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:16:55.528 [2024-07-24 21:38:40.435812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190df988 00:16:55.528 [2024-07-24 21:38:40.438300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.438338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.451387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e01f8 00:16:55.528 [2024-07-24 21:38:40.453669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.453696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.465458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e0a68 00:16:55.528 [2024-07-24 21:38:40.467743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.528 [2024-07-24 21:38:40.467782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:55.528 [2024-07-24 21:38:40.479528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e12d8 00:16:55.528 [2024-07-24 21:38:40.481795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.529 [2024-07-24 21:38:40.481821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:55.529 [2024-07-24 21:38:40.493413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e1b48 00:16:55.529 [2024-07-24 21:38:40.495745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.529 [2024-07-24 21:38:40.495785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:55.529 [2024-07-24 21:38:40.507605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e23b8 00:16:55.529 [2024-07-24 21:38:40.509748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.529 [2024-07-24 21:38:40.509775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:55.529 [2024-07-24 21:38:40.521134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e2c28 00:16:55.529 [2024-07-24 21:38:40.523214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.529 [2024-07-24 21:38:40.523241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.534764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e3498 00:16:55.788 [2024-07-24 21:38:40.536801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.536827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.548319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e3d08 00:16:55.788 [2024-07-24 21:38:40.550339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.550365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.561509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e4578 00:16:55.788 [2024-07-24 21:38:40.563587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.563616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.574853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e4de8 00:16:55.788 [2024-07-24 21:38:40.576845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.576872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.587967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e5658 00:16:55.788 [2024-07-24 21:38:40.589884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.589911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.601089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e5ec8 00:16:55.788 [2024-07-24 21:38:40.603046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.603078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.614245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e6738 00:16:55.788 [2024-07-24 21:38:40.616256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.616283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.627615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e6fa8 00:16:55.788 [2024-07-24 21:38:40.629454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.629480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.640883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e7818 00:16:55.788 [2024-07-24 21:38:40.642725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.642746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.653966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e8088 00:16:55.788 [2024-07-24 21:38:40.655923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.655950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.667726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e88f8 00:16:55.788 [2024-07-24 21:38:40.669516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.669543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.681106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e9168 00:16:55.788 [2024-07-24 21:38:40.682921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.682948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.694373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190e99d8 00:16:55.788 [2024-07-24 21:38:40.696398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.696425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.707903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ea248 00:16:55.788 [2024-07-24 21:38:40.709697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.709722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.721559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eaab8 00:16:55.788 [2024-07-24 21:38:40.723503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.723535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.734920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eb328 00:16:55.788 [2024-07-24 21:38:40.736732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.736759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.748195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ebb98 00:16:55.788 [2024-07-24 21:38:40.749973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.749999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.762175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ec408 00:16:55.788 [2024-07-24 21:38:40.764089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.764116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:55.788 [2024-07-24 21:38:40.776516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ecc78 00:16:55.788 [2024-07-24 21:38:40.778658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.788 [2024-07-24 21:38:40.778683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:56.047 [2024-07-24 21:38:40.791308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ed4e8 00:16:56.048 [2024-07-24 21:38:40.793226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.793264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.804955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190edd58 00:16:56.048 [2024-07-24 21:38:40.806527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.806554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.818207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ee5c8 00:16:56.048 [2024-07-24 21:38:40.819911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.819937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.831607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eee38 00:16:56.048 [2024-07-24 21:38:40.833194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.833221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.844857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190ef6a8 00:16:56.048 [2024-07-24 21:38:40.846494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.846531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.858309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190eff18 00:16:56.048 [2024-07-24 21:38:40.859986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.860013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.871715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f0788 00:16:56.048 [2024-07-24 21:38:40.873275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.873302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.885030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f0ff8 00:16:56.048 [2024-07-24 21:38:40.886585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.886611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.898376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f1868 00:16:56.048 [2024-07-24 21:38:40.900027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.900063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.911790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f20d8 00:16:56.048 [2024-07-24 21:38:40.913311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.913338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.924955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f2948 00:16:56.048 [2024-07-24 21:38:40.926484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.926510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.938503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f31b8 00:16:56.048 [2024-07-24 21:38:40.940189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.940215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.951906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f3a28 00:16:56.048 [2024-07-24 21:38:40.953381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.953407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.965195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f4298 00:16:56.048 [2024-07-24 21:38:40.966713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.966738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.978370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f4b08 00:16:56.048 [2024-07-24 21:38:40.979900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:40.979925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:40.991704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f5378 00:16:56.048 [2024-07-24 21:38:40.993132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 
21:38:40.993158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:41.004899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f5be8 00:16:56.048 [2024-07-24 21:38:41.006356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:41.006382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:41.018088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f6458 00:16:56.048 [2024-07-24 21:38:41.019641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:41.019667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:41.031226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f6cc8 00:16:56.048 [2024-07-24 21:38:41.032689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:41.032710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:56.048 [2024-07-24 21:38:41.044488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f7538 00:16:56.048 [2024-07-24 21:38:41.045998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.048 [2024-07-24 21:38:41.046031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.058094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f7da8 00:16:56.307 [2024-07-24 21:38:41.059582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.059608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.072066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f8618 00:16:56.307 [2024-07-24 21:38:41.073460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.073486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.085819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f8e88 00:16:56.307 [2024-07-24 21:38:41.087266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:56.307 [2024-07-24 21:38:41.087307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.100987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f96f8 00:16:56.307 [2024-07-24 21:38:41.102443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.102476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.115342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f9f68 00:16:56.307 [2024-07-24 21:38:41.116784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.116824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.129607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fa7d8 00:16:56.307 [2024-07-24 21:38:41.130910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.130964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.143336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fb048 00:16:56.307 [2024-07-24 21:38:41.144706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.144734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.156805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fb8b8 00:16:56.307 [2024-07-24 21:38:41.158087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.158113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.170148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fc128 00:16:56.307 [2024-07-24 21:38:41.171479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.171517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.183993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fc998 00:16:56.307 [2024-07-24 21:38:41.185293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7216 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.185319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.197500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fd208 00:16:56.307 [2024-07-24 21:38:41.198760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.198788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.212261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fda78 00:16:56.307 [2024-07-24 21:38:41.213637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.213676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.227546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fe2e8 00:16:56.307 [2024-07-24 21:38:41.228728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.228766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.242051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190feb58 00:16:56.307 [2024-07-24 21:38:41.243468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.243503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.263429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fef90 00:16:56.307 [2024-07-24 21:38:41.265907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.265933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.277506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190feb58 00:16:56.307 [2024-07-24 21:38:41.279831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.279857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.290973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fe2e8 00:16:56.307 [2024-07-24 21:38:41.293245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:13412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.293270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:56.307 [2024-07-24 21:38:41.304521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fda78 00:16:56.307 [2024-07-24 21:38:41.306858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.307 [2024-07-24 21:38:41.306884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:56.566 [2024-07-24 21:38:41.319444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fd208 00:16:56.566 [2024-07-24 21:38:41.321905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.566 [2024-07-24 21:38:41.321949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:56.566 [2024-07-24 21:38:41.335679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fc998 00:16:56.566 [2024-07-24 21:38:41.338106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.566 [2024-07-24 21:38:41.338143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:56.566 [2024-07-24 21:38:41.350491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fc128 00:16:56.566 [2024-07-24 21:38:41.352842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.566 [2024-07-24 21:38:41.352871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:56.566 [2024-07-24 21:38:41.365018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fb8b8 00:16:56.566 [2024-07-24 21:38:41.367324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.566 [2024-07-24 21:38:41.367355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:56.566 [2024-07-24 21:38:41.380659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fb048 00:16:56.566 [2024-07-24 21:38:41.382703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.566 [2024-07-24 21:38:41.382732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:56.566 [2024-07-24 21:38:41.395578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190fa7d8 00:16:56.566 [2024-07-24 21:38:41.397992] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:56.566 [2024-07-24 21:38:41.398027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:16:56.566 [2024-07-24 21:38:41.411164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14ca650) with pdu=0x2000190f9f68
00:16:56.566 [2024-07-24 21:38:41.413434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:56.566 [2024-07-24 21:38:41.413460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:16:56.566
00:16:56.566 Latency(us)
00:16:56.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:56.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:56.566 nvme0n1 : 2.00 17669.01 69.02 0.00 0.00 7238.38 2502.28 27525.12
00:16:56.566 ===================================================================================================================
00:16:56.566 Total : 17669.01 69.02 0.00 0.00 7238.38 2502.28 27525.12
00:16:56.566 0
00:16:56.566 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:16:56.566 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:16:56.566 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:16:56.566 | .driver_specific
00:16:56.566 | .nvme_error
00:16:56.566 | .status_code
00:16:56.566 | .command_transient_transport_error'
00:16:56.566 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79566
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79566 ']'
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79566
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79566
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:16:56.825 killing process with pid 79566 Received shutdown signal, test time was about 2.000000 seconds
00:16:56.825
00:16:56.825 Latency(us)
00:16:56.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:56.825 ===================================================================================================================
00:16:56.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
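The trace above closes the 4096-byte error run: get_transient_errcount pulls the per-bdev NVMe error counters via bdev_get_iostat, filters out command_transient_transport_error with jq, and, because the count (138 in this run) is non-zero, kills the bdevperf instance. A minimal standalone sketch of that readout follows, assuming the SPDK checkout path and RPC socket used in this run; adjust both elsewhere.

#!/usr/bin/env bash
# Sketch of the readout traced above: query bdevperf's RPC server for the bdev's
# I/O statistics and extract the transient transport error counter.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
sock=/var/tmp/bperf.sock                          # bdevperf RPC socket as used in this run
bdev=nvme0n1

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The test treats a non-zero count as proof that the injected digest corruption
# surfaced as TRANSIENT TRANSPORT ERROR completions.
if (( errcount > 0 )); then
  echo "observed $errcount transient transport errors on $bdev"
else
  echo "no transient transport errors recorded on $bdev" >&2
  exit 1
fi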
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79566'
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79566
00:16:56.825 21:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79566
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79622
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79622 /var/tmp/bperf.sock
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79622 ']'
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:16:57.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:57.084 21:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:16:57.084 I/O size of 131072 is greater than zero copy threshold (65536).
00:16:57.084 Zero copy mechanism will not be used.
00:16:57.084 [2024-07-24 21:38:42.067118] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization...
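The trace above tears down the previous bdevperf (pid 79566) and starts the next error run, run_bperf_err randwrite 131072 16: 128 KiB random writes at queue depth 16 for 2 seconds, with RPCs served on /var/tmp/bperf.sock. A sketch of that launch follows; paths and the socket name are the ones used in this run, and the autotest waitforlisten helper is approximated here by polling the RPC socket rather than reproduced.

#!/usr/bin/env bash
# Sketch of the bdevperf launch traced above. The -z flag makes bdevperf sit idle
# until perform_tests is sent over RPC, so the test can configure error injection first.
spdk=/home/vagrant/spdk_repo/spdk   # checkout path as used in this run
sock=/var/tmp/bperf.sock

"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the RPC server answers before configuring the run (stand-in for waitforlisten).
until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done
echo "bdevperf (pid $bperfpid) is listening on $sock"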
00:16:57.084 [2024-07-24 21:38:42.067201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79622 ]
00:16:57.343 [2024-07-24 21:38:42.203959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:57.343 [2024-07-24 21:38:42.312858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:57.602 [2024-07-24 21:38:42.391400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:16:58.168 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:58.168 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:16:58.168 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:58.168 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:58.426 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:16:58.426 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.426 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:16:58.426 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.426 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:58.426 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:58.685 nvme0n1
00:16:58.685 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:16:58.685 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.685 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:16:58.685 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.685 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:16:58.685 21:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:16:58.685 I/O size of 131072 is greater than zero copy threshold (65536).
00:16:58.685 Zero copy mechanism will not be used.
00:16:58.685 Running I/O for 2 seconds...
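The trace above shows how the run is armed before the 2-second I/O loop starts: per-status-code NVMe error accounting is enabled, crc32c error injection is reset to disabled, the target is attached over TCP with host-side data digest enabled (--ddgst), crc32c corruption is then switched on in the accel layer (the -o crc32c -t corrupt -i 32 arguments as traced), and finally bdevperf.py perform_tests releases the queued bdevperf job. A sketch stringing those same RPCs together, under the path, address and NQN assumptions of this run:

#!/usr/bin/env bash
# Sketch of the error-injection setup traced above; every RPC below appears
# verbatim in the trace, only the wrapper function is added for brevity.
set -e
spdk=/home/vagrant/spdk_repo/spdk                       # checkout path as used in this run
rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable       # start from a clean injection state
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # attach with data digest enabled
rpc accel_error_inject_error -o crc32c -t corrupt -i 32 # enable crc32c corruption (flags as traced)
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests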
00:16:58.685 [2024-07-24 21:38:43.614456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.614789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.614833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.619898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.620187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.620219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.625044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.625331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.625361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.630338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.630636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.630666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.635547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.635832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.635862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.640199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.640277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.640301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.644950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.645038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.645062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.649889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.649966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.649990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.654945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.655027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.655072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.660249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.660327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.660351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.665361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.665444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.665469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.670337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.670414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.670438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.675300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.675371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.675395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.680381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.680476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.680500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.685 [2024-07-24 21:38:43.685697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.685 [2024-07-24 21:38:43.685785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.685 [2024-07-24 21:38:43.685809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.691230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.691296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.691320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.696439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.696534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.696557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.701724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.701817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.701841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.706869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.706956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.706978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.712070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.712147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.712170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.717095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.717183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.717205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.722261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.722348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.722371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.727171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.727234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.727258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.732275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.732351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.732374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.737309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.737392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.737414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.742417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.742505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.742527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.747639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.747729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.747750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.752701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.752786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.752808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.758156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.758275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.758297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.763847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.763974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.948 [2024-07-24 21:38:43.763996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.948 [2024-07-24 21:38:43.769433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.948 [2024-07-24 21:38:43.769523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.769545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.775362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.775484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.775506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.781339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.781421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.781443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.787271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.787395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.787430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.793227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.793341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 
21:38:43.793381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.799135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.799207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.799231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.804981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.805089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.805113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.810688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.810826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.810848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.816408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.816553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.816575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.822150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.822265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.822288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.827796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.827910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.827949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.833333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.833479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:58.949 [2024-07-24 21:38:43.833501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.838973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.839173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.839197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.844771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.844873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.844893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.850526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.850682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.850737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.856201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.856332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.856356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.861927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.862135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.862156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.867739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.867835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.867858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.873499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.873708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.873731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.879018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.879170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.879195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.884856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.885171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.885230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.890818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.891137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.891166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.896165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.896366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.896389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.901682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.902030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.907793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.908209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.908236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.913579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.913912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.913953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.919825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.920149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.949 [2024-07-24 21:38:43.920205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.949 [2024-07-24 21:38:43.925563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.949 [2024-07-24 21:38:43.925949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.950 [2024-07-24 21:38:43.926024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:58.950 [2024-07-24 21:38:43.931479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.950 [2024-07-24 21:38:43.931866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.950 [2024-07-24 21:38:43.931894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:58.950 [2024-07-24 21:38:43.937247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.950 [2024-07-24 21:38:43.937580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.950 [2024-07-24 21:38:43.937607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.950 [2024-07-24 21:38:43.942564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:58.950 [2024-07-24 21:38:43.942860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.950 [2024-07-24 21:38:43.942887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.948378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.948689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.948730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.953978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.954245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.954272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.959257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.959321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.959387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.964974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.965063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.965084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.970528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.970614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.970647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.976180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.976264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.976288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.981810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.981900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.981924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.988107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.988217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.988241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.993735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 
21:38:43.993814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.993838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:43.999174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:43.999236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:43.999260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:44.005502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.235 [2024-07-24 21:38:44.005599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.235 [2024-07-24 21:38:44.005623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.235 [2024-07-24 21:38:44.011985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.012075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.012098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.017803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.017890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.017913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.023798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.023909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.023957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.029593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.029677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.029701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.035362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with 
pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.035490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.035511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.040797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.040879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.040901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.046291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.046379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.046401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.051571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.051713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.051736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.056287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.056344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.056365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.061172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.061298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.061319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.066451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.066555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.066575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.071777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.071865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.071887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.077091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.077174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.077196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.082658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.082750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.082772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.088465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.088555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.088585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.094036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.094153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.094174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.099768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.099858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.099881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.105305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.105452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.105475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.110769] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.110912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.110934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.116200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.116304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.116326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.121671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.121764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.121787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.127162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.127249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.127280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.132831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.132907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.132944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.138359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.138463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.138489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.143889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.143955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.143980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.149308] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.149416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.149443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.154486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.236 [2024-07-24 21:38:44.154697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.236 [2024-07-24 21:38:44.154724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.236 [2024-07-24 21:38:44.159839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.159965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.165252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.165429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.165453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.171007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.171206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.171231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.177014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.177176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.177200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.182453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.182572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.182594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.237 
[2024-07-24 21:38:44.187795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.187993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.188031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.193350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.193466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.193486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.198615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.198748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.198769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.204059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.204146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.204167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.209420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.209532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.209554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.214487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.214642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.214675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.219759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.219847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.219868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:16:59.237 [2024-07-24 21:38:44.225292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.237 [2024-07-24 21:38:44.225378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.237 [2024-07-24 21:38:44.225399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.505 [2024-07-24 21:38:44.230985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.505 [2024-07-24 21:38:44.231126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.505 [2024-07-24 21:38:44.231150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.505 [2024-07-24 21:38:44.236446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.505 [2024-07-24 21:38:44.236549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.505 [2024-07-24 21:38:44.236570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.505 [2024-07-24 21:38:44.241598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.505 [2024-07-24 21:38:44.241711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.505 [2024-07-24 21:38:44.241732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.505 [2024-07-24 21:38:44.246932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.505 [2024-07-24 21:38:44.247043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.505 [2024-07-24 21:38:44.247103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.505 [2024-07-24 21:38:44.252209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.252305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.252327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.257570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.257782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.257805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.262588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.262766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.262787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.267932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.268130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.268150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.273088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.273305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.273326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.278480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.278688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.278711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.283749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.284036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.284068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.289084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.289243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.289266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.294165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.294339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.294359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.299251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.299463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.299493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.304918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.305192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.305219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.310609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.310910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.310938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.316615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.316927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.316985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.322377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.322635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.322667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.327990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.328263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.328294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.333747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.334021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.334048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.338852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.339160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.339188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.343997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.344258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.344285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.349166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.349445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.349472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.354324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.354586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.354618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.359500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.359788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.359828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.364872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.365135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.365156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.370021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.370092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 
21:38:44.370112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.375214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.375273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.375297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.380832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.380913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.380947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.386401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.386458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.386479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.506 [2024-07-24 21:38:44.392569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.506 [2024-07-24 21:38:44.392696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.506 [2024-07-24 21:38:44.392733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.398894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.398961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.399011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.404744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.404810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.404833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.410415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.410492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.410512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.416079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.416150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.416171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.421483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.421554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.421575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.426805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.426864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.426886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.432217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.432291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.432312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.437384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.437455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.437476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.442558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.442637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.442670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.447866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.447938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.447959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.452981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.453052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.453073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.458091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.458177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.458199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.463565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.463668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.463693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.469078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.469175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.469198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.474719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.474787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.474811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.480748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.480824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.480848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.486038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.486111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.486132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.491342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.491470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.491492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.496400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.496475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.496497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.507 [2024-07-24 21:38:44.501513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.507 [2024-07-24 21:38:44.501582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.507 [2024-07-24 21:38:44.501603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.506772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.506861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.506882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.511993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.512087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.512108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.517089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.517160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.517181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.522224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 
[2024-07-24 21:38:44.522296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.522317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.527320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.527424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.527445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.532590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.532669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.532707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.538222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.538317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.538356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.543955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.544030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.544051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.549395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.549476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.549497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.555519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.555597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.555618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.561334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.561406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.561427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.566908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.566978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.567003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.572967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.573061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.573083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.579259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.579369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.579422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.585357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.585428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.585450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.590931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.590991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.591012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.596247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.596331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.596352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.601258] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.601335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.601356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.606395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.606466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.606487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.611611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.611707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.611755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.616836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.616948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.621931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.621990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.622025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.767 [2024-07-24 21:38:44.627202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.767 [2024-07-24 21:38:44.627265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.767 [2024-07-24 21:38:44.627287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.632372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.632454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.632475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:16:59.768 [2024-07-24 21:38:44.637607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.637705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.637726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.643254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.643323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.643396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.648491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.648570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.648591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.653510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.653589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.653610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.658558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.658654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.658704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.663746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.663826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.663847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.668637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.668719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.668739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.673574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.673682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.673703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.678528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.678596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.678617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.684082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.684156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.684177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.689387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.689467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.689488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.694500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.694571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.694591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.699755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.699828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.699849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.705200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.705273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.705295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.710562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.710649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.710671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.715787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.715856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.715877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.720921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.720996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.721032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.726169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.726249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.726270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.731353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.731465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.731487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.736316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.736388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.736409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.741350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.741426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.741447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.746293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.746367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.746389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.751517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.751592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.751612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.756617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.756717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.768 [2024-07-24 21:38:44.756750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.768 [2024-07-24 21:38:44.761677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.768 [2024-07-24 21:38:44.761759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.769 [2024-07-24 21:38:44.761779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.769 [2024-07-24 21:38:44.767186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:16:59.769 [2024-07-24 21:38:44.767256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.769 [2024-07-24 21:38:44.767279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.772616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.772716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.772737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.777668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.777756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 
[2024-07-24 21:38:44.777776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.782831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.782904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.782924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.788051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.788166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.793367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.793499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.793520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.798502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.798584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.798605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.803896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.803989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.809226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.809301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.809322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.814470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.814560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.814581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.819741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.819828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.819851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.824870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.824940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.824961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.830123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.830214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.830237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.835416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.835499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.835520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.840518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.840601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.840632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.845555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.845657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.845680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.850829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.850913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.850935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.855992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.856078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.856098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.860925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.861139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.861159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.866287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.866392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.866429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.871633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.871741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.871762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.876847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.876933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.876954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.882003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.882124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.882146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.887237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.887302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.029 [2024-07-24 21:38:44.887323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.029 [2024-07-24 21:38:44.892288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.029 [2024-07-24 21:38:44.892398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.892420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.897230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.897314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.897350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.902177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.902271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.907457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.907567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.907588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.912820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.912919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.912944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.918139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.918283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.918306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.923464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 
21:38:44.923580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.923601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.928651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.928756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.928777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.933950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.934040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.934090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.939636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.939792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.939813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.945369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.945472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.945495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.952037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.952125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.952149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.957931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.958132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.958169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.963861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with 
pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.964001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.964025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.969850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.969922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.969959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.975906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.976031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.976054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.981625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.981792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.981815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.987178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.987274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.987296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.993152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.993252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.993275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:44.998622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:44.998715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:44.998749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:45.003843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:45.003918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:45.003940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:45.009178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:45.009312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:45.009349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:45.014218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:45.014441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:45.014462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:45.019903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:45.020103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:45.020126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.030 [2024-07-24 21:38:45.025099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.030 [2024-07-24 21:38:45.025251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.030 [2024-07-24 21:38:45.025276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.030943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.031097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.031120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.037014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.037156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.037179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.042468] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.042704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.042726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.047880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.047977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.047998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.053221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.053546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.053573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.058763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.059075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.059109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.064568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.064832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.064853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.069602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.069705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.069727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.075187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.075248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.075273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:00.291 [2024-07-24 21:38:45.080692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.080765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.080786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.086042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.086143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.086166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.091443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.091539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.091560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.096928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.097023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.097063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.102598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.102682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.102703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.107890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.107978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.107998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.113324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.113415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.113451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.118820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.118881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.118900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.124414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.124506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.124528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.129968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.130060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.130084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.135362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.135472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.135494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.140852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.140923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.140943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.146092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.146167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.146191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.291 [2024-07-24 21:38:45.151006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.291 [2024-07-24 21:38:45.151135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.291 [2024-07-24 21:38:45.151158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.156400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.156514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.161503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.161571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.161591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.166459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.166539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.166560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.171629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.171714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.171747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.176509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.176580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.176602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.181643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.181718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.181739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.186655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.186725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.186745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.191566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.191638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.191659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.196419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.196495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.196515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.201334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.201404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.201425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.206215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.206296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.206317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.211226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.211285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.211305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.216100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.216168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.216189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.220993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.221061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 
[2024-07-24 21:38:45.221081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.225888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.225963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.225984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.231003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.231124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.231147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.236431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.236503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.236524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.241476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.241547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.241567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.246605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.246698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.246718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.251894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.251967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.251999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.257053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.257133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.257154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.262189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.262262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.262282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.267077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.267138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.267158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.271987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.272055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.272076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.292 [2024-07-24 21:38:45.276809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.292 [2024-07-24 21:38:45.276892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.292 [2024-07-24 21:38:45.276912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.293 [2024-07-24 21:38:45.281720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.293 [2024-07-24 21:38:45.281791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.293 [2024-07-24 21:38:45.281812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.293 [2024-07-24 21:38:45.286539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.293 [2024-07-24 21:38:45.286617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.293 [2024-07-24 21:38:45.286652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.291668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.291738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.291759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.296714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.296782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.296815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.301840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.301913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.301937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.307133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.307207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.312569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.312638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.312671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.317823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.317897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.317918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.323608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.323717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.323738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.329399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.329471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.329492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.335323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.335448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.335469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.341349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.341433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.341453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.347520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.347604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.347627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.353124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.353225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.353245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.358358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.358436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.358457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.363780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.363867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.363890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.369140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 
[2024-07-24 21:38:45.369212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.369232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.374283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.374351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.374372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.379375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.379458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.379479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.384482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.384553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.384574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.389311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.389386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.389407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.394774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.394846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.394868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.399819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.553 [2024-07-24 21:38:45.399902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.553 [2024-07-24 21:38:45.399926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.553 [2024-07-24 21:38:45.405221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.405328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.405350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.410760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.410859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.410882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.416862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.416940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.416963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.422894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.422969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.422992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.428779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.428857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.428880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.434912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.435088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.435123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.440623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.440758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.440779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.445931] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.446031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.446051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.451160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.451261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.451283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.456148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.456222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.456243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.461131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.461207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.461228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.466096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.466187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.466208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.471158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.471415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.471438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.476441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.476524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.476545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:00.554 [2024-07-24 21:38:45.481531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.481600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.481622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.486667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.486802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.486824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.491955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.492112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.492133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.497423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.497513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.497534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.502998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.503116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.503139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.508751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.508847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.508885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.513946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.514095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.514115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.519321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.519478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.519498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.524755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.524863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.524883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.530039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.530148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.530168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.534984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.535082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.535103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.540023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.540129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.540149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.554 [2024-07-24 21:38:45.544925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.554 [2024-07-24 21:38:45.545006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.554 [2024-07-24 21:38:45.545026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.555 [2024-07-24 21:38:45.549942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.555 [2024-07-24 21:38:45.550032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.555 [2024-07-24 21:38:45.550052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.555315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.555441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.555473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.560631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.560720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.560740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.565400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.565480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.565500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.570323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.570396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.570416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.575322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.575445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.575465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.580774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.580864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.580884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.586316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.586404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.586426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.592146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.592271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.592291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.598174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.598305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.598341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.814 [2024-07-24 21:38:45.604078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x168e080) with pdu=0x2000190fef90 00:17:00.814 [2024-07-24 21:38:45.604176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.814 [2024-07-24 21:38:45.604205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.814 00:17:00.814 Latency(us) 00:17:00.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.814 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:00.814 nvme0n1 : 2.00 5740.50 717.56 0.00 0.00 2781.19 1891.61 6434.44 00:17:00.814 =================================================================================================================== 00:17:00.814 Total : 5740.50 717.56 0.00 0.00 2781.19 1891.61 6434.44 00:17:00.814 0 00:17:00.814 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:00.814 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:00.814 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:00.814 | .driver_specific 00:17:00.814 | .nvme_error 00:17:00.814 | .status_code 00:17:00.814 | .command_transient_transport_error' 00:17:00.814 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79622 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79622 ']' 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79622 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79622 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:01.073 killing process with pid 79622 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79622' 00:17:01.073 Received shutdown signal, test time was about 2.000000 seconds 00:17:01.073 00:17:01.073 Latency(us) 00:17:01.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.073 =================================================================================================================== 00:17:01.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79622 00:17:01.073 21:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79622 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79415 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79415 ']' 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79415 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79415 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:01.332 killing process with pid 79415 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79415' 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79415 00:17:01.332 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79415 00:17:01.590 00:17:01.590 real 0m18.421s 00:17:01.590 user 0m34.106s 00:17:01.590 sys 0m5.810s 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:01.590 ************************************ 00:17:01.590 END TEST nvmf_digest_error 00:17:01.590 ************************************ 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@117 -- # sync 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.590 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.590 rmmod nvme_tcp 00:17:01.850 rmmod nvme_fabrics 00:17:01.850 rmmod nvme_keyring 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79415 ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79415 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79415 ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79415 00:17:01.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79415) - No such process 00:17:01.850 Process with pid 79415 is not found 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79415 is not found' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:01.850 00:17:01.850 real 0m38.363s 00:17:01.850 user 1m10.428s 00:17:01.850 sys 0m11.778s 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.850 ************************************ 00:17:01.850 END TEST nvmf_digest 00:17:01.850 ************************************ 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.850 
************************************ 00:17:01.850 START TEST nvmf_host_multipath 00:17:01.850 ************************************ 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:01.850 * Looking for test storage... 00:17:01.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:01.850 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.851 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:02.109 Cannot find device "nvmf_tgt_br" 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.109 Cannot find device "nvmf_tgt_br2" 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:02.109 Cannot find device "nvmf_tgt_br" 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:02.109 Cannot find device "nvmf_tgt_br2" 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.109 21:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.109 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.109 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.109 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.109 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.109 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.110 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:02.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:17:02.379 00:17:02.379 --- 10.0.0.2 ping statistics --- 00:17:02.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.379 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:02.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:17:02.379 00:17:02.379 --- 10.0.0.3 ping statistics --- 00:17:02.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.379 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:02.379 00:17:02.379 --- 10.0.0.1 ping statistics --- 00:17:02.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.379 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=79881 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 79881 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 79881 ']' 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.379 21:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:02.379 [2024-07-24 21:38:47.277650] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:17:02.379 [2024-07-24 21:38:47.277745] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.639 [2024-07-24 21:38:47.412734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:02.639 [2024-07-24 21:38:47.520938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.639 [2024-07-24 21:38:47.521001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.639 [2024-07-24 21:38:47.521013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.639 [2024-07-24 21:38:47.521048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.639 [2024-07-24 21:38:47.521056] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.639 [2024-07-24 21:38:47.521246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.639 [2024-07-24 21:38:47.521260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.639 [2024-07-24 21:38:47.575800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79881 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:03.575 [2024-07-24 21:38:48.501096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.575 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:03.834 Malloc0 00:17:03.834 21:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:04.093 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.352 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.611 [2024-07-24 21:38:49.461331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.611 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:04.870 [2024-07-24 21:38:49.673616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:04.870 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79937 00:17:04.870 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:04.870 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.870 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79937 /var/tmp/bdevperf.sock 00:17:04.871 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 79937 ']' 00:17:04.871 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.871 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.871 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.871 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.871 21:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:05.807 21:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.807 21:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:05.807 21:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:06.066 21:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:06.325 Nvme0n1 00:17:06.325 21:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:06.583 Nvme0n1 00:17:06.583 21:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:06.583 21:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:07.957 21:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:07.957 21:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:07.957 21:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:08.215 21:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:08.215 21:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79982 00:17:08.215 21:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.215 21:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.780 Attaching 4 probes... 00:17:14.780 @path[10.0.0.2, 4421]: 18287 00:17:14.780 @path[10.0.0.2, 4421]: 17944 00:17:14.780 @path[10.0.0.2, 4421]: 18079 00:17:14.780 @path[10.0.0.2, 4421]: 17413 00:17:14.780 @path[10.0.0.2, 4421]: 18885 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79982 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:14.780 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:15.039 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:15.039 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80100 00:17:15.039 21:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:15.039 21:38:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:21.602 21:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:21.602 21:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:21.603 Attaching 4 probes... 00:17:21.603 @path[10.0.0.2, 4420]: 18126 00:17:21.603 @path[10.0.0.2, 4420]: 17388 00:17:21.603 @path[10.0.0.2, 4420]: 16950 00:17:21.603 @path[10.0.0.2, 4420]: 17808 00:17:21.603 @path[10.0.0.2, 4420]: 18496 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80100 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:21.603 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:21.861 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:21.861 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:21.861 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80212 00:17:21.861 21:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.422 Attaching 4 probes... 00:17:28.422 @path[10.0.0.2, 4421]: 11722 00:17:28.422 @path[10.0.0.2, 4421]: 14936 00:17:28.422 @path[10.0.0.2, 4421]: 15319 00:17:28.422 @path[10.0.0.2, 4421]: 14457 00:17:28.422 @path[10.0.0.2, 4421]: 16890 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80212 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:28.422 21:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:28.422 21:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:28.680 21:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:28.680 21:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80325 00:17:28.680 21:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.680 21:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.245 Attaching 4 probes... 
00:17:35.245 00:17:35.245 00:17:35.245 00:17:35.245 00:17:35.245 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80325 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:35.245 21:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:35.245 21:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:35.503 21:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:35.503 21:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:35.503 21:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80443 00:17:35.503 21:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.064 Attaching 4 probes... 
00:17:42.064 @path[10.0.0.2, 4421]: 17534 00:17:42.064 @path[10.0.0.2, 4421]: 17717 00:17:42.064 @path[10.0.0.2, 4421]: 15696 00:17:42.064 @path[10.0.0.2, 4421]: 15260 00:17:42.064 @path[10.0.0.2, 4421]: 18144 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80443 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:42.064 21:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:42.999 21:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:42.999 21:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80561 00:17:42.999 21:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:42.999 21:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:49.565 21:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:49.565 21:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:49.565 Attaching 4 probes... 
00:17:49.565 @path[10.0.0.2, 4420]: 17591 00:17:49.565 @path[10.0.0.2, 4420]: 14392 00:17:49.565 @path[10.0.0.2, 4420]: 15071 00:17:49.565 @path[10.0.0.2, 4420]: 18552 00:17:49.565 @path[10.0.0.2, 4420]: 19551 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80561 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:49.565 [2024-07-24 21:39:34.312054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.565 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:49.824 21:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:56.432 21:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:56.432 21:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80735 00:17:56.432 21:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:56.432 21:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:01.702 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:01.703 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.285 Attaching 4 probes... 
00:18:02.285 @path[10.0.0.2, 4421]: 19343 00:18:02.285 @path[10.0.0.2, 4421]: 19478 00:18:02.285 @path[10.0.0.2, 4421]: 19313 00:18:02.285 @path[10.0.0.2, 4421]: 18097 00:18:02.285 @path[10.0.0.2, 4421]: 18087 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80735 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79937 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 79937 ']' 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 79937 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.285 21:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79937 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:02.285 killing process with pid 79937 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79937' 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 79937 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 79937 00:18:02.285 Connection closed with partial response: 00:18:02.285 00:18:02.285 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79937 00:18:02.285 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:02.285 [2024-07-24 21:38:49.738297] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
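The bdevperf log dumped here (try.txt) is easier to follow with the shape of the test in mind: the target exposes one Malloc-backed subsystem through two TCP listeners on 10.0.0.2 (ports 4420 and 4421), bdevperf attaches both paths to the same Nvme0 controller (the second attach with -x multipath), and the script then cycles the listeners' ANA states (optimized, non_optimized, inaccessible), each time using the nvmf_path.bt bpftrace probes to confirm that I/O is actually flowing over the port whose ANA state matches the expectation, before finally removing and re-adding the 4421 listener to exercise failover and failback. The sketch below is a reconstruction assembled from the rpc.py and bpftrace.sh invocations echoed earlier in this log, not a copy of host/multipath.sh itself; the NQN, addresses, ports and PID are taken verbatim from this run, while the confirm_path helper name and its exact return logic are introduced here only for illustration.

#!/usr/bin/env bash
# Sketch of the multipath flow seen in this log. Assumes a running nvmf_tgt
# (pid 79881 in this run) and a bdevperf instance started separately as:
#   build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
# with I/O kicked off over its RPC socket by:
#   examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
nvmfapp_pid=79881   # pid of the nvmf_tgt in this run; substitute the real pid

# Target side: TCP transport, Malloc0 namespace, two listeners on 4420/4421.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421

# Host side: attach both paths to the same bdev; -x multipath keeps the second
# path as an alternate for Nvme0n1 instead of failing the attach.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x multipath -l -1 -o 10

# Check which listener actually carries I/O: the bpftrace script counts
# completions per path, and the jq filter maps the expected ANA state back to
# a port number. confirm_path is an illustrative stand-in for confirm_io_on_port.
confirm_path() {
    local state=$1 expected_port=$2
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$nvmfapp_pid" \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
    local dtrace_pid=$!
    sleep 6
    local active_port seen_port
    active_port=$($rpc nvmf_subsystem_get_listeners $nqn |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$state\") | .address.trsvcid")
    seen_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    kill "$dtrace_pid"
    rm -f trace.txt
    [[ $seen_port == "$active_port" && $seen_port == "$expected_port" ]]
}

# One of the scenarios exercised above: 4420 non_optimized, 4421 optimized,
# so I/O is expected on port 4421.
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
confirm_path optimized 4421

The same check explains the earlier trace.txt block that contains only bare timestamps: with both listeners set to inaccessible, confirm_io_on_port runs with an empty expected state and port, the probes count no completions, and the empty-string comparisons still pass. The ASYMMETRIC ACCESS INACCESSIBLE completions in the bdevperf output that continues below are the host-side view of those same ANA transitions, after which the @path counters show I/O settling on the listener left in the expected state.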
00:18:02.285 [2024-07-24 21:38:49.738386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79937 ] 00:18:02.285 [2024-07-24 21:38:49.877992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.285 [2024-07-24 21:38:49.992979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.285 [2024-07-24 21:38:50.052527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:02.285 Running I/O for 90 seconds... 00:18:02.285 [2024-07-24 21:38:59.824433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.824966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.824980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.825421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.825459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.825515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.825548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.825579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.285 [2024-07-24 21:38:59.825609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.285 [2024-07-24 21:38:59.825641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.285 [2024-07-24 21:38:59.825672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.285 [2024-07-24 21:38:59.825702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:02.285 [2024-07-24 21:38:59.825737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.825968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.825987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.826000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.826050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.826087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.826123] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.826159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.826195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.826234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.826271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.826894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.826933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.826952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.826965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.286 [2024-07-24 21:38:59.827346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:119 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.286 [2024-07-24 21:38:59.827833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.286 [2024-07-24 21:38:59.827852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.827865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.827883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.827896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.827914] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.827927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.827946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.827959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.827986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.828000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.828035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.828072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.828094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.828109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.828131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.828146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.828169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.828184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e 
p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.287 [2024-07-24 21:38:59.829465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.829980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.829999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.830028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.830066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.830081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.830103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.830117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.830139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.830154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.830175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.287 [2024-07-24 21:38:59.830191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:02.287 [2024-07-24 21:38:59.830213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.830709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:96 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.830985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.830998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.831983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.288 [2024-07-24 21:38:59.831996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:18:02.288 [2024-07-24 21:38:59.832278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:38:59.832322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.288 [2024-07-24 21:38:59.832337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:02.288 [2024-07-24 21:39:06.380703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.380758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.380831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.380849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.380869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.380882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.380899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.380911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.380929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.380961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.380981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.380993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.381022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.289 [2024-07-24 21:39:06.381051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129320 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.289 [2024-07-24 21:39:06.381702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:02.289 [2024-07-24 21:39:06.381727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.381740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.381770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.381810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.381845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.381876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.381905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.381935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.381964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.381982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.381994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:02.290 
[2024-07-24 21:39:06.382288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.290 [2024-07-24 21:39:06.382555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.382976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:02.290 [2024-07-24 21:39:06.382994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.290 [2024-07-24 21:39:06.383006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:02.291 [2024-07-24 21:39:06.383307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.291 [2024-07-24 21:39:06.383958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.383976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.383987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.291 [2024-07-24 21:39:06.384308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 
p:0 m:0 dnr:0 00:18:02.291 [2024-07-24 21:39:06.384326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.292 [2024-07-24 21:39:06.384338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.384356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.292 [2024-07-24 21:39:06.384368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.384386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.292 [2024-07-24 21:39:06.384398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.384416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.292 [2024-07-24 21:39:06.384428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.384447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.292 [2024-07-24 21:39:06.384459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.292 [2024-07-24 21:39:06.385233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 21:39:06.385844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:02.292 [2024-07-24 21:39:06.385869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.292 [2024-07-24 
21:39:06.385882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:18:02.292 [2024-07-24 21:39:06.385908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:02.292 [2024-07-24 21:39:06.385929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:18:02.292 [2024-07-24 21:39:06.385959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:02.292 [2024-07-24 21:39:06.385973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:18:02.292 [2024-07-24 21:39:13.423032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:02.292 [2024-07-24 21:39:13.423124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
... [2024-07-24 21:39:13.423160 through 21:39:13.449060] repeated nvme_qpair.c *NOTICE* pairs (nvme_io_qpair_print_command / spdk_nvme_print_completion): each remaining outstanding READ and WRITE on qid:1 is printed and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...
00:18:02.298 [2024-07-24 
21:39:13.449079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.449698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.449745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.449792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.449851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.449906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.449953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.449986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.298 [2024-07-24 21:39:13.450506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.450552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:18:02.298 [2024-07-24 21:39:13.450581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.450600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.450684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.450732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.298 [2024-07-24 21:39:13.450761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.298 [2024-07-24 21:39:13.450779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.450808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.450827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.450855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.450874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.450902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.450921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.450949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.450967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.450995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.299 [2024-07-24 21:39:13.451332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 
21:39:13.451597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.451953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.451971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109768 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:02.299 [2024-07-24 21:39:13.452546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.299 [2024-07-24 21:39:13.452565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452593] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.452611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.452695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.452742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.452789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.452836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.452896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.452944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.452981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.453001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.453059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.453119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:02.300 
[2024-07-24 21:39:13.453147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.453166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.453212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.453965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.453984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.454042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.454089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.300 [2024-07-24 21:39:13.454136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:02.300 [2024-07-24 21:39:13.454554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.300 [2024-07-24 21:39:13.454573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.454951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.454999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.455017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.455046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.455064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.455118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.455137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.455166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.455184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.455213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.455232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.457443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.457533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.457583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.457657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.457709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.301 [2024-07-24 21:39:13.457756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.457824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.457871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 21:39:13.457901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.301 [2024-07-24 21:39:13.457920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:02.301 [2024-07-24 
21:39:13.457960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:02.301 [2024-07-24 21:39:13.457979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:02.301-00:18:02.307 [2024-07-24 21:39:13.458026 - 21:39:26.771184] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated per-I/O READ/WRITE command and completion traces on sqid:1 nsid:1 (len:8, lba ranges ~109272-110288 and ~18224-19056); completions report ASYMMETRIC ACCESS INACCESSIBLE (03/02) and, from 21:39:26.769908 onward, ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:02.307 [2024-07-24 21:39:26.771197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:02.307 [2024-07-24 21:39:26.771210] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.771837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.771980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.771993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.772005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.772018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.772030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 
21:39:26.772044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.307 [2024-07-24 21:39:26.772055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.772069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.772081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.772094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.772106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.772119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.772131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.307 [2024-07-24 21:39:26.772145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.307 [2024-07-24 21:39:26.772156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.308 [2024-07-24 21:39:26.772169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.308 [2024-07-24 21:39:26.772181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.308 [2024-07-24 21:39:26.772200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.308 [2024-07-24 21:39:26.772212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.308 [2024-07-24 21:39:26.772225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.308 [2024-07-24 21:39:26.772237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.308 [2024-07-24 21:39:26.772249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d680 is same with the state(5) to be set 00:18:02.308 [2024-07-24 21:39:26.772264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:02.308 [2024-07-24 21:39:26.772274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:02.308 [2024-07-24 21:39:26.772283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18792 len:8 PRP1 0x0 PRP2 0x0 00:18:02.308 [2024-07-24 21:39:26.772300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.308 
[2024-07-24 21:39:26.772352] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x90d680 was disconnected and freed. reset controller. 00:18:02.308 [2024-07-24 21:39:26.773360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:02.308 [2024-07-24 21:39:26.773430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.308 [2024-07-24 21:39:26.773450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.308 [2024-07-24 21:39:26.773486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88f100 (9): Bad file descriptor 00:18:02.308 [2024-07-24 21:39:26.773844] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:02.308 [2024-07-24 21:39:26.773872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88f100 with addr=10.0.0.2, port=4421 00:18:02.308 [2024-07-24 21:39:26.773886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88f100 is same with the state(5) to be set 00:18:02.308 [2024-07-24 21:39:26.773916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88f100 (9): Bad file descriptor 00:18:02.308 [2024-07-24 21:39:26.773942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:02.308 [2024-07-24 21:39:26.773957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:02.308 [2024-07-24 21:39:26.773970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:02.308 [2024-07-24 21:39:26.773999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:02.308 [2024-07-24 21:39:26.774013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:02.308 [2024-07-24 21:39:36.846368] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:02.308 Received shutdown signal, test time was about 55.401622 seconds 00:18:02.308 00:18:02.308 Latency(us) 00:18:02.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.308 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:02.308 Verification LBA range: start 0x0 length 0x4000 00:18:02.308 Nvme0n1 : 55.40 7516.39 29.36 0.00 0.00 17002.02 901.12 7076934.75 00:18:02.308 =================================================================================================================== 00:18:02.308 Total : 7516.39 29.36 0.00 0.00 17002.02 901.12 7076934.75 00:18:02.308 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.567 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.567 rmmod nvme_tcp 00:18:02.567 rmmod nvme_fabrics 00:18:02.567 rmmod nvme_keyring 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 79881 ']' 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 79881 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 79881 ']' 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 79881 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79881 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:02.902 killing process with pid 79881 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79881' 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 79881 00:18:02.902 21:39:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 79881 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.902 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:03.163 00:18:03.163 real 1m1.194s 00:18:03.163 user 2m49.929s 00:18:03.163 sys 0m18.016s 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:03.163 ************************************ 00:18:03.163 END TEST nvmf_host_multipath 00:18:03.163 ************************************ 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.163 ************************************ 00:18:03.163 START TEST nvmf_timeout 00:18:03.163 ************************************ 00:18:03.163 21:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:03.163 * Looking for test storage... 
00:18:03.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:03.163 Cannot find device "nvmf_tgt_br" 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:03.163 Cannot find device "nvmf_tgt_br2" 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:03.163 Cannot find device "nvmf_tgt_br" 00:18:03.163 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:03.164 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:03.164 Cannot find device "nvmf_tgt_br2" 00:18:03.164 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:03.164 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.426 21:39:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.426 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.427 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.427 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:03.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:03.427 00:18:03.427 --- 10.0.0.2 ping statistics --- 00:18:03.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.427 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:03.427 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:03.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:03.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:18:03.427 00:18:03.427 --- 10.0.0.3 ping statistics --- 00:18:03.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.427 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:03.427 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:03.686 00:18:03.686 --- 10.0.0.1 ping statistics --- 00:18:03.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.686 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81046 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81046 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81046 ']' 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.686 21:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:03.686 [2024-07-24 21:39:48.511913] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:18:03.686 [2024-07-24 21:39:48.512571] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.686 [2024-07-24 21:39:48.655372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:03.945 [2024-07-24 21:39:48.771892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:03.945 [2024-07-24 21:39:48.771956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.945 [2024-07-24 21:39:48.771970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.945 [2024-07-24 21:39:48.771981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.945 [2024-07-24 21:39:48.771991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.945 [2024-07-24 21:39:48.772135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.945 [2024-07-24 21:39:48.772150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.945 [2024-07-24 21:39:48.830872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.511 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:04.770 [2024-07-24 21:39:49.746540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.029 21:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:05.287 Malloc0 00:18:05.287 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.546 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.803 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.062 [2024-07-24 21:39:50.848354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81101 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81101 /var/tmp/bdevperf.sock 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81101 ']' 00:18:06.062 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock... 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.062 21:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:06.062 [2024-07-24 21:39:50.914706] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:18:06.062 [2024-07-24 21:39:50.914803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:18:06.062 [2024-07-24 21:39:51.052378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.320 [2024-07-24 21:39:51.145527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.320 [2024-07-24 21:39:51.201212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.254 21:39:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:07.254 21:39:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:07.254 21:39:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:07.255 21:39:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:07.513 NVMe0n1 00:18:07.513 21:39:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:07.513 21:39:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81120 00:18:07.513 21:39:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:07.771 Running I/O for 10 seconds... 
00:18:08.707 21:39:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.707 [2024-07-24 21:39:53.631233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.707 [2024-07-24 21:39:53.631308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.707 [2024-07-24 21:39:53.631350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.707 [2024-07-24 21:39:53.631369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.707 [2024-07-24 21:39:53.631388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd40 is same with the state(5) to be set 00:18:08.707 [2024-07-24 21:39:53.631750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.707 [2024-07-24 21:39:53.631774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 
21:39:53.631878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.631989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.631999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.632020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.632043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.632063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.632084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.632108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.707 [2024-07-24 21:39:53.632129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.707 [2024-07-24 21:39:53.632138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68144 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 
21:39:53.632723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.708 [2024-07-24 21:39:53.632868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.708 [2024-07-24 21:39:53.632879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.632889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.632900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.632910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.632921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.632930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.632942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.632951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.632963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.632972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.632983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.632993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:08.709 [2024-07-24 21:39:53.633355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.709 [2024-07-24 21:39:53.633579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.709 [2024-07-24 21:39:53.633590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633790] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.633980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.633989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.634009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.634029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.634054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.634075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.710 [2024-07-24 21:39:53.634096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.710 [2024-07-24 21:39:53.634219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.710 [2024-07-24 21:39:53.634292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.710 [2024-07-24 21:39:53.634301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.711 [2024-07-24 21:39:53.634322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.711 [2024-07-24 21:39:53.634342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.711 [2024-07-24 21:39:53.634362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.711 [2024-07-24 21:39:53.634387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.711 [2024-07-24 21:39:53.634407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.711 [2024-07-24 21:39:53.634428] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180d1b0 is same with the state(5) to be set 00:18:08.711 [2024-07-24 21:39:53.634449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.711 [2024-07-24 21:39:53.634457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.711 [2024-07-24 21:39:53.634465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68760 len:8 PRP1 0x0 PRP2 0x0 00:18:08.711 [2024-07-24 21:39:53.634474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.711 [2024-07-24 21:39:53.634526] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x180d1b0 was disconnected and freed. reset controller. 00:18:08.711 [2024-07-24 21:39:53.634767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.711 [2024-07-24 21:39:53.634796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179cd40 (9): Bad file descriptor 00:18:08.711 [2024-07-24 21:39:53.634884] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.711 [2024-07-24 21:39:53.634905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x179cd40 with addr=10.0.0.2, port=4420 00:18:08.711 [2024-07-24 21:39:53.634915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd40 is same with the state(5) to be set 00:18:08.711 [2024-07-24 21:39:53.634933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179cd40 (9): Bad file descriptor 00:18:08.711 [2024-07-24 21:39:53.634948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:08.711 [2024-07-24 21:39:53.634957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:08.711 [2024-07-24 21:39:53.634968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:08.711 [2024-07-24 21:39:53.634987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
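What the long run of ABORTED - SQ DELETION entries above records: once the listener was removed at 21:39:53, every command still queued on qid:1 (reads and writes in the lba 67744-68760 range) was aborted because its submission queue was deleted, qpair 0x180d1b0 was disconnected and freed, and bdev_nvme began resetting the controller. Each reconnect to 10.0.0.2:4420 then fails with errno 111 (ECONNREFUSED, since nothing is listening on port 4420 any more), and the attempts repeat at the 2 s reconnect delay while the 5 s controller-loss timeout runs. The check the script performs next (host/timeout.sh@57-58 below, via its get_controller and get_bdev helpers) is that the controller and its bdev remain registered during that window. A minimal stand-alone sketch of that check, with the shell variable names invented here for illustration:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # While reconnect attempts are still in flight, the controller and the
  # bdev built on it must still be visible on the initiator side.
  ctrlr_name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')   # expected: NVMe0
  bdev_name=$($RPC bdev_get_bdevs | jq -r '.[].name')               # expected: NVMe0n1
  [[ $ctrlr_name == NVMe0 && $bdev_name == NVMe0n1 ]] \
      || echo "controller or bdev dropped before the 5 s loss timeout expired"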
00:18:08.711 [2024-07-24 21:39:53.634997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.711 21:39:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:11.239 [2024-07-24 21:39:55.635314] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.239 [2024-07-24 21:39:55.635368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x179cd40 with addr=10.0.0.2, port=4420 00:18:11.239 [2024-07-24 21:39:55.635384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd40 is same with the state(5) to be set 00:18:11.239 [2024-07-24 21:39:55.635409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179cd40 (9): Bad file descriptor 00:18:11.239 [2024-07-24 21:39:55.635428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:11.239 [2024-07-24 21:39:55.635453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:11.239 [2024-07-24 21:39:55.635463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:11.239 [2024-07-24 21:39:55.635489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:11.239 [2024-07-24 21:39:55.635500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:11.239 21:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:11.239 21:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:11.239 21:39:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:12.640 [2024-07-24 21:39:57.635821] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.640 [2024-07-24 21:39:57.635910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x179cd40 with addr=10.0.0.2, port=4420 00:18:12.640 [2024-07-24 21:39:57.635928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd40 is same with the state(5) to be set 00:18:12.640 [2024-07-24 21:39:57.635954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179cd40 (9): Bad file descriptor 00:18:12.640 [2024-07-24 21:39:57.635972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:12.640 [2024-07-24 21:39:57.635982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:12.640 [2024-07-24 
21:39:57.635992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:12.640 [2024-07-24 21:39:57.636020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.640 [2024-07-24 21:39:57.636031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:15.165 [2024-07-24 21:39:59.636138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:15.165 [2024-07-24 21:39:59.636185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.165 [2024-07-24 21:39:59.636213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:15.165 [2024-07-24 21:39:59.636223] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:15.165 [2024-07-24 21:39:59.636259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:15.731 00:18:15.731 Latency(us) 00:18:15.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.731 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.731 Verification LBA range: start 0x0 length 0x4000 00:18:15.731 NVMe0n1 : 8.12 1042.82 4.07 15.76 0.00 120753.25 3589.59 7015926.69 00:18:15.731 =================================================================================================================== 00:18:15.731 Total : 1042.82 4.07 15.76 0.00 120753.25 3589.59 7015926.69 00:18:15.731 0 00:18:16.297 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:16.297 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:16.297 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:16.555 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:16.555 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:16.555 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:16.555 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81120 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81101 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81101 ']' 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81101 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:16.812 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.813 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81101 00:18:16.813 killing process with pid 81101 00:18:16.813 Received shutdown signal, test time was about 9.150090 seconds 00:18:16.813 00:18:16.813 Latency(us) 00:18:16.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:16.813 =================================================================================================================== 00:18:16.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.813 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:16.813 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:16.813 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81101' 00:18:16.813 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81101 00:18:16.813 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81101 00:18:17.070 21:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.328 [2024-07-24 21:40:02.124828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81245 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81245 /var/tmp/bdevperf.sock 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81245 ']' 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.328 21:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:17.328 [2024-07-24 21:40:02.186826] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
00:18:17.328 [2024-07-24 21:40:02.187072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81245 ] 00:18:17.328 [2024-07-24 21:40:02.317553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.586 [2024-07-24 21:40:02.404677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.586 [2024-07-24 21:40:02.458563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.151 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.151 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:18.151 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:18.410 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:18.668 NVMe0n1 00:18:18.668 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.668 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81263 00:18:18.668 21:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:18.926 Running I/O for 10 seconds... 
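The second pass relaunches bdevperf and re-attaches the same controller, this time adding --fast-io-fail-timeout-sec 2 and shortening the reconnect delay to 1 s on top of the 5 s controller-loss timeout, so that (as the flag name suggests) I/O queued during the outage can be failed back to bdevperf well before the controller itself is given up. The injected fault is the same in both passes: the target's TCP listener is removed (host/timeout.sh@55 earlier, host/timeout.sh@87 immediately below) and re-added between passes (host/timeout.sh@71 at 21:40:02). A sketch of that listener toggle, addressed to the nvmf target's default RPC socket as implied by the log's rpc.py calls that carry no -s option; subsystem and address values are copied from the log:

  TGT_RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Drop the listener out from under the connected initiator, triggering the
  # abort/reconnect sequence seen in the log...
  $TGT_RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

  # ...and restore it once the timeout behaviour has been observed.
  $TGT_RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420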
00:18:19.861 21:40:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.122 [2024-07-24 21:40:04.866442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.122 [2024-07-24 21:40:04.866494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.122 [2024-07-24 21:40:04.866535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.122 [2024-07-24 21:40:04.866554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.122 [2024-07-24 21:40:04.866588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set 00:18:20.122 [2024-07-24 21:40:04.866899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.122 [2024-07-24 21:40:04.866932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.866962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.866984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.866995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 
21:40:04.867034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.122 [2024-07-24 21:40:04.867414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.122 [2024-07-24 21:40:04.867425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 
21:40:04.867920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.867981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.867993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.123 [2024-07-24 21:40:04.868224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.123 [2024-07-24 21:40:04.868235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:20.124 [2024-07-24 21:40:04.868550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.868980] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.868990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.869001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.869009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.869020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.869029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.869040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.869054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.124 [2024-07-24 21:40:04.869065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.124 [2024-07-24 21:40:04.869074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 21:40:04.869276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:20.125 [2024-07-24 21:40:04.869401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.125 [2024-07-24 21:40:04.869594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.125 [2024-07-24 
21:40:04.869615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da01b0 is same with the state(5) to be set 00:18:20.125 [2024-07-24 21:40:04.869648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:20.125 [2024-07-24 21:40:04.869656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:20.125 [2024-07-24 21:40:04.869664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:18:20.125 [2024-07-24 21:40:04.869673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.125 [2024-07-24 21:40:04.869725] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1da01b0 was disconnected and freed. reset controller. 00:18:20.125 [2024-07-24 21:40:04.869958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.125 [2024-07-24 21:40:04.869987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:20.125 [2024-07-24 21:40:04.870083] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.125 [2024-07-24 21:40:04.870105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2fd40 with addr=10.0.0.2, port=4420 00:18:20.125 [2024-07-24 21:40:04.870116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set 00:18:20.125 [2024-07-24 21:40:04.870134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:20.125 [2024-07-24 21:40:04.870150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:20.125 [2024-07-24 21:40:04.870159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:20.125 [2024-07-24 21:40:04.870169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:20.125 [2024-07-24 21:40:04.870188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
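The connect() failed, errno = 111 entries above are the host side of the induced outage: with the target listener removed, every reconnect attempt is refused and the controller cycles through reset and failed states until the listener returns. A hedged way to watch that from outside the test, assuming the same bdevperf RPC socket; bdev_nvme_get_controllers is a standard SPDK RPC, but this polling loop is purely illustrative and not part of host/timeout.sh.

# Poll controller state from the bdevperf app while the listener is down.
RPC_SOCK=/var/tmp/bdevperf.sock
for _ in 1 2 3 4 5; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" bdev_nvme_get_controllers -n NVMe0 || true
    sleep 1
done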
00:18:20.125 [2024-07-24 21:40:04.870198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:20.125 21:40:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:18:21.059 [2024-07-24 21:40:05.870312] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:21.060 [2024-07-24 21:40:05.870377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2fd40 with addr=10.0.0.2, port=4420
00:18:21.060 [2024-07-24 21:40:05.870392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set
00:18:21.060 [2024-07-24 21:40:05.870414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor
00:18:21.060 [2024-07-24 21:40:05.870431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:21.060 [2024-07-24 21:40:05.870440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:21.060 [2024-07-24 21:40:05.870450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:21.060 [2024-07-24 21:40:05.870498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:21.060 [2024-07-24 21:40:05.870508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:21.060 21:40:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:21.377 [2024-07-24 21:40:06.083572] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:21.377 21:40:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81263
00:18:21.957 [2024-07-24 21:40:06.885008] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:30.068
00:18:30.068                                              Latency(us)
00:18:30.068 Device Information            : runtime(s)     IOPS      MiB/s    Fail/s    TO/s     Average      min         max
00:18:30.068 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:30.068 Verification LBA range: start 0x0 length 0x4000
00:18:30.068 NVMe0n1                       :      10.01   6579.94     25.70      0.00    0.00    19414.28    1660.74  3019898.88
00:18:30.068 ===================================================================================================================
00:18:30.068 Total                         :              6579.94     25.70      0.00    0.00    19414.28    1660.74  3019898.88
00:18:30.068 0
00:18:30.068 21:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81372
00:18:30.068 21:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:30.068 21:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:18:30.068 Running I/O for 10 seconds...
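The first run completes at roughly 6.6 k IOPS even though the listener was pulled mid-run: the host keeps retrying once per second, reconnects as soon as nvmf_subsystem_add_listener restores the port, and the verify job drains cleanly before the 10 s runtime is reported. The second perform_tests run (rpc_pid=81372) then repeats the same cycle. A condensed sketch of the target-side listener bounce as driven in the trace, assuming rpc.py reaches the target over its default socket and that rpc_pid still holds the PID of the background perform_tests started earlier.

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # target-side RPC, default socket
NQN=nqn.2016-06.io.spdk:cnode1

# Pull the TCP listener out from under the connected host to force the
# reconnect path, wait past --reconnect-delay-sec, then restore it.
"$RPC_PY" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1
"$RPC_PY" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Reap the background perform_tests; bdevperf prints the latency table above
# once the 10 s verify workload completes.
wait "$rpc_pid"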
00:18:30.068 21:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.329 [2024-07-24 21:40:15.072694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf26ff0 is same with the state(5) to be set 00:18:30.329 [2024-07-24 21:40:15.072745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf26ff0 is same with the state(5) to be set 00:18:30.329 [2024-07-24 21:40:15.072758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf26ff0 is same with the state(5) to be set 00:18:30.329 [2024-07-24 21:40:15.073125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.329 [2024-07-24 21:40:15.073154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.329 [2024-07-24 21:40:15.073175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.329 [2024-07-24 21:40:15.073186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.329 [2024-07-24 21:40:15.073198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.329 [2024-07-24 21:40:15.073208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.329 [2024-07-24 21:40:15.073219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.329 [2024-07-24 21:40:15.073229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.329 [2024-07-24 21:40:15.073240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.329 [2024-07-24 21:40:15.073250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.329 [2024-07-24 21:40:15.073261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073322] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073539] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.073913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.073986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.073995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.074007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.074016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.074028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.074037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.074048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.074057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.074068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.330 [2024-07-24 21:40:15.074077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.330 [2024-07-24 21:40:15.074089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.330 [2024-07-24 21:40:15.074098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 
21:40:15.074188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.331 [2024-07-24 21:40:15.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.331 [2024-07-24 21:40:15.074913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.331 [2024-07-24 21:40:15.074923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.074932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.074943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.074952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.074963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.074972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.074983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.074993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 
21:40:15.075024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.332 [2024-07-24 21:40:15.075245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.332 [2024-07-24 21:40:15.075388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f040 is same with the state(5) to be set 00:18:30.332 [2024-07-24 21:40:15.075410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80504 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075445] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:18:30.332 [2024-07-24 21:40:15.075723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.332 [2024-07-24 21:40:15.075732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.332 [2024-07-24 21:40:15.075739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.332 [2024-07-24 21:40:15.075747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.075772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81096 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.075804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.075837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 
21:40:15.075870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81120 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.075909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81128 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.075943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81136 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.075968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.075975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.075983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.075992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.076001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.076008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.076015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81152 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.076024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.076033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.076040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.076048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81160 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.076057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.076066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.076073] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.076080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.076089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.076098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.333 [2024-07-24 21:40:15.076105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.333 [2024-07-24 21:40:15.076113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:18:30.333 [2024-07-24 21:40:15.076121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.333 [2024-07-24 21:40:15.076173] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d9f040 was disconnected and freed. reset controller. 00:18:30.333 [2024-07-24 21:40:15.076411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.333 [2024-07-24 21:40:15.076489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:30.333 [2024-07-24 21:40:15.076594] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.333 [2024-07-24 21:40:15.076633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2fd40 with addr=10.0.0.2, port=4420 00:18:30.333 [2024-07-24 21:40:15.076647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set 00:18:30.333 [2024-07-24 21:40:15.076665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:30.333 [2024-07-24 21:40:15.076681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.333 [2024-07-24 21:40:15.076690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:30.333 21:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:30.333 [2024-07-24 21:40:15.090855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:30.333 [2024-07-24 21:40:15.090904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:30.333 [2024-07-24 21:40:15.090919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.269 [2024-07-24 21:40:16.091070] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.269 [2024-07-24 21:40:16.091166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2fd40 with addr=10.0.0.2, port=4420 00:18:31.269 [2024-07-24 21:40:16.091183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set 00:18:31.269 [2024-07-24 21:40:16.091211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:31.269 [2024-07-24 21:40:16.091230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:31.269 [2024-07-24 21:40:16.091240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:31.269 [2024-07-24 21:40:16.091251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:31.269 [2024-07-24 21:40:16.091277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:31.269 [2024-07-24 21:40:16.091288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:32.205 [2024-07-24 21:40:17.091461] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.205 [2024-07-24 21:40:17.091547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2fd40 with addr=10.0.0.2, port=4420 00:18:32.205 [2024-07-24 21:40:17.091577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set 00:18:32.205 [2024-07-24 21:40:17.091602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:32.205 [2024-07-24 21:40:17.091621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:32.205 [2024-07-24 21:40:17.091630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:32.205 [2024-07-24 21:40:17.091641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:32.205 [2024-07-24 21:40:17.091698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
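The repeating "connect() failed, errno = 111" entries above are ECONNREFUSED: the target side no longer has a listener on 10.0.0.2:4420, so each reconnect attempt bdev_nvme makes for nqn.2016-06.io.spdk:cnode1 is refused and the controller reset fails, roughly once per second, until the listener comes back below. While such a loop is running, the controller state can be inspected over bdevperf's RPC socket; the command below is a hypothetical spot-check, with the rpc.py and socket paths taken from other lines of this run rather than from this exact step:

  # Hypothetical spot-check during the reconnect loop; rpc.py and the socket
  # path are the ones used elsewhere in this log, not quoted from this step.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers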
00:18:32.205 [2024-07-24 21:40:17.091712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.141 [2024-07-24 21:40:18.092270] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.141 [2024-07-24 21:40:18.092323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2fd40 with addr=10.0.0.2, port=4420 00:18:33.141 [2024-07-24 21:40:18.092339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fd40 is same with the state(5) to be set 00:18:33.141 21:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.141 [2024-07-24 21:40:18.092604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fd40 (9): Bad file descriptor 00:18:33.141 [2024-07-24 21:40:18.092858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.141 [2024-07-24 21:40:18.092872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:33.141 [2024-07-24 21:40:18.092882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.141 [2024-07-24 21:40:18.096747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:33.141 [2024-07-24 21:40:18.096778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.400 [2024-07-24 21:40:18.304894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.400 21:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81372 00:18:34.335 [2024-07-24 21:40:19.134002] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
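The recovery above lines up with host/timeout.sh@102 re-adding the TCP listener: nvmf_tcp_listen reports 10.0.0.2 port 4420 listening again at 21:40:18.304894, and the next reset attempt completes with "Resetting controller successful" at 21:40:19.134002. A minimal sketch of the listener bounce that produces this fail-then-recover pattern is given below; it assumes the script does remove, sleep, then add, since only the sleep 3 (host/timeout.sh@101) and the add_listener call are visible in this part of the trace:

  # Hedged sketch of the listener bounce, not the verbatim host/timeout.sh.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # in-flight I/O aborts, reconnects get errno 111
  sleep 3                                                                      # matches host/timeout.sh@101 above
  "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # listener restored, reset succeeds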
00:18:39.651 00:18:39.651 Latency(us) 00:18:39.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.651 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:39.651 Verification LBA range: start 0x0 length 0x4000 00:18:39.651 NVMe0n1 : 10.01 5521.36 21.57 3789.60 0.00 13718.52 677.70 3019898.88 00:18:39.651 =================================================================================================================== 00:18:39.651 Total : 5521.36 21.57 3789.60 0.00 13718.52 0.00 3019898.88 00:18:39.651 0 00:18:39.651 21:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81245 00:18:39.651 21:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81245 ']' 00:18:39.651 21:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81245 00:18:39.651 21:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:39.651 21:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.651 21:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81245 00:18:39.651 killing process with pid 81245 00:18:39.651 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.651 00:18:39.651 Latency(us) 00:18:39.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.651 =================================================================================================================== 00:18:39.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81245' 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81245 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81245 00:18:39.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81482 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81482 /var/tmp/bdevperf.sock 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81482 ']' 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
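The flattened summary above closes the first bdevperf run: over a 10.01 s runtime NVMe0n1 averaged 5521.36 IOPS (21.57 MiB/s) with 3789.60 failed I/O per second, an average latency of 13718.52 us and a maximum of 3019898.88 us, reflecting the commands aborted while the listener was down. The script then kills pid 81245 and relaunches bdevperf in RPC-wait mode (-z) on /var/tmp/bdevperf.sock. A hedged sketch of that relaunch follows; the bdevperf arguments are copied from the trace, but the readiness loop is only a stand-in for autotest_common.sh's waitforlisten (which caps polling at max_retries=100 above):

  # Relaunch sketch; the until-loop approximates waitforlisten and is not its
  # actual implementation.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  "$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!

  # Poll until the bdevperf application answers RPCs on its socket.
  until "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done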
00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:39.651 [2024-07-24 21:40:24.285724] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:18:39.651 [2024-07-24 21:40:24.286573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81482 ] 00:18:39.651 [2024-07-24 21:40:24.422690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.651 [2024-07-24 21:40:24.513704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.651 [2024-07-24 21:40:24.568851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81482 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81495 00:18:39.651 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:40.218 21:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:40.218 NVMe0n1 00:18:40.477 21:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81532 00:18:40.477 21:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:40.477 21:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.477 Running I/O for 10 seconds... 
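Before the second run starts, the script attaches the nvmf_timeout.bt bpftrace probe to the new bdevperf process (pid 81482, dtrace_pid=81495), configures bdev_nvme, and attaches controller NVMe0 with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, i.e. bdev_nvme retries a lost controller for up to 5 seconds, one reconnect attempt every 2 seconds, before giving up; bdevperf.py perform_tests then drives the 10-second randread workload. Restated as hand-issued commands (all values copied verbatim from the trace; the meanings of -r and -e to bdev_nvme_set_options are not expanded here):

  # Same configuration steps as traced above, issued manually over the
  # bdevperf RPC socket.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  "$rpc_py" -s "$sock" bdev_nvme_set_options -r -1 -e 9
  "$rpc_py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2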
00:18:41.414 21:40:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:41.676 [2024-07-24 21:40:26.486560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf34790 is same with the state(5) to be set
[... the tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line above repeats verbatim for tqpair=0xf34790 through 00:18:41.678 [2024-07-24 21:40:26.487746]; only the timestamps differ ...]
00:18:41.678 [2024-07-24 21:40:26.487803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.678 [2024-07-24 21:40:26.487831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:41.678 [2024-07-24 21:40:26.487853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.678 [2024-07-24 21:40:26.487864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:41.678 [2024-07-24 21:40:26.487876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.678 [2024-07-24 21:40:26.487886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.487897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.487906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.487918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.487927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.487938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.487947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.487959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.487968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.487979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.487988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.487999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.678 [2024-07-24 21:40:26.488566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.678 [2024-07-24 21:40:26.488577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:41.679 [2024-07-24 21:40:26.488939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.488980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.488990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489146] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.679 [2024-07-24 21:40:26.489380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.679 [2024-07-24 21:40:26.489389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.489986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.489996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 [2024-07-24 21:40:26.490175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.680 
[2024-07-24 21:40:26.490195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.680 [2024-07-24 21:40:26.490206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490396] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.681 [2024-07-24 21:40:26.490457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13686a0 is same with the state(5) to be set 00:18:41.681 [2024-07-24 21:40:26.490478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:41.681 [2024-07-24 21:40:26.490486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:41.681 [2024-07-24 21:40:26.490494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37888 len:8 PRP1 0x0 PRP2 0x0 00:18:41.681 [2024-07-24 21:40:26.490503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490555] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13686a0 was disconnected and freed. reset controller. 
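The abort burst above is the expected effect of the host/timeout.sh@126 step at the top of this block: once the subsystem's TCP listener is removed, the target drops the connected I/O qpair (0x13686a0) and completes every queued READ as ABORTED - SQ DELETION before resetting the controller. For reference, a minimal sketch of the listener RPC pair this phase exercises; the remove call is the one traced above (path and NQN taken from this log), while the matching add_listener call that would restore the listener afterwards is an assumption and does not appear in this excerpt:

  # Remove the TCP listener the host is connected through; in-flight I/O on
  # that qpair is aborted with "SQ DELETION", as logged above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Re-add the same listener later so the host's reset/reconnect loop can succeed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420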
00:18:41.681 [2024-07-24 21:40:26.490658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.681 [2024-07-24 21:40:26.490674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.681 [2024-07-24 21:40:26.490695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.681 [2024-07-24 21:40:26.490713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.681 [2024-07-24 21:40:26.490732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.681 [2024-07-24 21:40:26.490741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317c00 is same with the state(5) to be set 00:18:41.681 [2024-07-24 21:40:26.490979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.681 [2024-07-24 21:40:26.491008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317c00 (9): Bad file descriptor 00:18:41.681 [2024-07-24 21:40:26.491123] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.681 [2024-07-24 21:40:26.491152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1317c00 with addr=10.0.0.2, port=4420 00:18:41.681 [2024-07-24 21:40:26.491164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317c00 is same with the state(5) to be set 00:18:41.681 [2024-07-24 21:40:26.491182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317c00 (9): Bad file descriptor 00:18:41.681 [2024-07-24 21:40:26.491198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:41.681 [2024-07-24 21:40:26.491207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:41.681 [2024-07-24 21:40:26.491218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:41.681 [2024-07-24 21:40:26.491238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
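Once the I/O qpair is freed, bdev_nvme resets the controller: the admin queue's ASYNC EVENT REQUESTs are aborted the same way, and the reconnect attempt to 10.0.0.2:4420 fails in uring_sock_create with errno = 111 (ECONNREFUSED) because the target's listener has been taken down, so the controller is marked failed and another reset is scheduled. The reconnect policy itself is set when the controller is attached; a hedged sketch of such an attach, using flags that exist on SPDK's bdev_nvme_attach_controller RPC (the values actually used by host/timeout.sh are not visible in this excerpt and are illustrative only):

  # illustrative values; the flags exist on bdev_nvme_attach_controller,
  # but timeout.sh's actual settings are not shown in this excerpt
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec -1 --fast-io-fail-timeout-sec 0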
00:18:41.681 [2024-07-24 21:40:26.491249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.681 21:40:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81532 00:18:43.584 [2024-07-24 21:40:28.491659] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.584 [2024-07-24 21:40:28.491715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1317c00 with addr=10.0.0.2, port=4420 00:18:43.584 [2024-07-24 21:40:28.491732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317c00 is same with the state(5) to be set 00:18:43.584 [2024-07-24 21:40:28.491758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317c00 (9): Bad file descriptor 00:18:43.584 [2024-07-24 21:40:28.491777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.584 [2024-07-24 21:40:28.491787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:43.584 [2024-07-24 21:40:28.491798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:43.584 [2024-07-24 21:40:28.491825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:43.584 [2024-07-24 21:40:28.491836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.114 [2024-07-24 21:40:30.492104] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.114 [2024-07-24 21:40:30.492166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1317c00 with addr=10.0.0.2, port=4420 00:18:46.114 [2024-07-24 21:40:30.492182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317c00 is same with the state(5) to be set 00:18:46.114 [2024-07-24 21:40:30.492208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317c00 (9): Bad file descriptor 00:18:46.114 [2024-07-24 21:40:30.492227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.114 [2024-07-24 21:40:30.492237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.114 [2024-07-24 21:40:30.492248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.114 [2024-07-24 21:40:30.492276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.114 [2024-07-24 21:40:30.492288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.014 [2024-07-24 21:40:32.492440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
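The timestamps of the refused attempts (21:40:26.491, 21:40:28.491, 21:40:30.492) show the reconnect cadence: one attempt roughly every 2 seconds, each failing with errno 111 (ECONNREFUSED, "Connection refused" on Linux), until the attempt at 21:40:32 finds the controller already in a failed state and the reset is reported as failed for the last time. A hedged one-liner for pulling that cadence back out of a saved copy of this output (the file name build.log is assumed):

  # timestamps of the refused connect() attempts
  grep 'connect() failed, errno = 111' build.log | grep -o '\[2024[^]]*\]'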
00:18:48.014 [2024-07-24 21:40:32.492501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.014 [2024-07-24 21:40:32.492514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:48.014 [2024-07-24 21:40:32.492525] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:48.014 [2024-07-24 21:40:32.492552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.580 00:18:48.580 Latency(us) 00:18:48.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.580 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:48.580 NVMe0n1 : 8.16 2116.10 8.27 15.68 0.00 59940.14 7983.48 7015926.69 00:18:48.580 =================================================================================================================== 00:18:48.580 Total : 2116.10 8.27 15.68 0.00 59940.14 7983.48 7015926.69 00:18:48.580 0 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.580 Attaching 5 probes... 00:18:48.580 1342.035534: reset bdev controller NVMe0 00:18:48.580 1342.101904: reconnect bdev controller NVMe0 00:18:48.580 3342.540169: reconnect delay bdev controller NVMe0 00:18:48.580 3342.575519: reconnect bdev controller NVMe0 00:18:48.580 5343.021947: reconnect delay bdev controller NVMe0 00:18:48.580 5343.056557: reconnect bdev controller NVMe0 00:18:48.580 7343.471771: reconnect delay bdev controller NVMe0 00:18:48.580 7343.493639: reconnect bdev controller NVMe0 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81495 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81482 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81482 ']' 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81482 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81482 00:18:48.580 killing process with pid 81482 00:18:48.580 Received shutdown signal, test time was about 8.220897 seconds 00:18:48.580 00:18:48.580 Latency(us) 00:18:48.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.580 =================================================================================================================== 00:18:48.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:48.580 21:40:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81482' 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81482 00:18:48.580 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81482 00:18:48.837 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.094 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:49.094 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:49.094 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.094 21:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.094 rmmod nvme_tcp 00:18:49.094 rmmod nvme_fabrics 00:18:49.094 rmmod nvme_keyring 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81046 ']' 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81046 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81046 ']' 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81046 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:49.094 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.353 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81046 00:18:49.353 killing process with pid 81046 00:18:49.353 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:49.353 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:49.353 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81046' 00:18:49.353 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81046 00:18:49.353 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81046 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.611 21:40:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:49.611 00:18:49.611 real 0m46.426s 00:18:49.611 user 2m16.094s 00:18:49.611 sys 0m5.415s 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:49.611 ************************************ 00:18:49.611 END TEST nvmf_timeout 00:18:49.611 ************************************ 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:49.611 00:18:49.611 real 5m6.691s 00:18:49.611 user 13m16.613s 00:18:49.611 sys 1m10.359s 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.611 21:40:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.611 ************************************ 00:18:49.611 END TEST nvmf_host 00:18:49.611 ************************************ 00:18:49.611 00:18:49.611 real 11m41.841s 00:18:49.611 user 28m16.721s 00:18:49.611 sys 3m4.354s 00:18:49.611 21:40:34 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.611 ************************************ 00:18:49.611 END TEST nvmf_tcp 00:18:49.611 ************************************ 00:18:49.611 21:40:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.611 21:40:34 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:18:49.611 21:40:34 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:49.611 21:40:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:49.611 21:40:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.611 21:40:34 -- common/autotest_common.sh@10 -- # set +x 00:18:49.611 ************************************ 00:18:49.611 START TEST nvmf_dif 00:18:49.611 ************************************ 00:18:49.611 21:40:34 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:49.611 * Looking for test storage... 
00:18:49.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:49.870 21:40:34 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.870 21:40:34 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.870 21:40:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.870 21:40:34 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.870 21:40:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.870 21:40:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 21:40:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 21:40:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 21:40:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:49.871 21:40:34 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.871 21:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:49.871 21:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:49.871 21:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:49.871 21:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:49.871 21:40:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.871 21:40:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:49.871 21:40:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.871 21:40:34 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:49.871 Cannot find device "nvmf_tgt_br" 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@155 -- # true 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.871 Cannot find device "nvmf_tgt_br2" 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@156 -- # true 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:49.871 Cannot find device "nvmf_tgt_br" 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@158 -- # true 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:49.871 Cannot find device "nvmf_tgt_br2" 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@159 -- # true 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:49.871 21:40:34 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.129 
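The trace above is nvmf_veth_init building the virtual test network for the dif suite: a network namespace nvmf_tgt_ns_spdk for the target, one veth pair for the initiator side and two for the target side, with the target ends moved into the namespace and addressed 10.0.0.2/24 and 10.0.0.3/24 while the initiator keeps 10.0.0.1/24. Gathered in one place, the same commands look like this (a consolidated sketch of what the trace shows, run as root; the nvmf_br bridge, the master assignments and the iptables rule for port 4420 follow in the trace below):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up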
21:40:34 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:50.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:50.129 00:18:50.129 --- 10.0.0.2 ping statistics --- 00:18:50.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.129 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:50.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:50.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:50.129 00:18:50.129 --- 10.0.0.3 ping statistics --- 00:18:50.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.129 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:50.129 00:18:50.129 --- 10.0.0.1 ping statistics --- 00:18:50.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.129 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:18:50.129 21:40:34 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:50.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:50.387 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:50.387 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.387 21:40:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:50.387 21:40:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.387 21:40:35 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:50.387 21:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:50.387 21:40:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=81980 00:18:50.388 
21:40:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:50.388 21:40:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 81980 00:18:50.388 21:40:35 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 81980 ']' 00:18:50.388 21:40:35 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.388 21:40:35 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.388 21:40:35 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.388 21:40:35 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.388 21:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:50.646 [2024-07-24 21:40:35.428659] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:18:50.646 [2024-07-24 21:40:35.429416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.646 [2024-07-24 21:40:35.569358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.905 [2024-07-24 21:40:35.673516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.905 [2024-07-24 21:40:35.673574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.905 [2024-07-24 21:40:35.673584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.905 [2024-07-24 21:40:35.673592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.905 [2024-07-24 21:40:35.673599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
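nvmfappstart launches the target pinned inside the namespace and waits for its JSON-RPC socket before any RPCs are issued; -i 0 selects shared-memory instance 0 and -e 0xFFFF enables all tracepoint groups, which is why the spdk_trace hints above are printed. A hedged sketch of the same launch-and-wait pattern outside the harness (paths are assumed, and waitforlisten's polling is approximated with a plain RPC probe):

  # paths assumed relative to an SPDK checkout; not the harness's exact waitforlisten logic
  sudo ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
  until sudo ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done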
00:18:50.905 [2024-07-24 21:40:35.673625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.905 [2024-07-24 21:40:35.730332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:18:51.510 21:40:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 21:40:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.510 21:40:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:51.510 21:40:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 [2024-07-24 21:40:36.455162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.510 21:40:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.510 21:40:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 ************************************ 00:18:51.510 START TEST fio_dif_1_default 00:18:51.510 ************************************ 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 bdev_null0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.510 21:40:36 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 [2024-07-24 21:40:36.499255] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.510 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:51.510 { 00:18:51.510 "params": { 00:18:51.510 "name": "Nvme$subsystem", 00:18:51.510 "trtype": "$TEST_TRANSPORT", 00:18:51.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.510 "adrfam": "ipv4", 00:18:51.510 "trsvcid": "$NVMF_PORT", 00:18:51.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.510 "hdgst": ${hdgst:-false}, 00:18:51.510 "ddgst": ${ddgst:-false} 00:18:51.510 }, 00:18:51.510 "method": "bdev_nvme_attach_controller" 00:18:51.510 } 00:18:51.510 EOF 00:18:51.510 )") 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:18:51.511 21:40:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:51.769 "params": { 00:18:51.769 "name": "Nvme0", 00:18:51.769 "trtype": "tcp", 00:18:51.769 "traddr": "10.0.0.2", 00:18:51.769 "adrfam": "ipv4", 00:18:51.769 "trsvcid": "4420", 00:18:51.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:51.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:51.769 "hdgst": false, 00:18:51.769 "ddgst": false 00:18:51.769 }, 00:18:51.769 "method": "bdev_nvme_attach_controller" 00:18:51.769 }' 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:51.769 21:40:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.769 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:51.769 fio-3.35 00:18:51.769 Starting 1 thread 00:19:03.977 00:19:03.977 filename0: (groupid=0, jobs=1): err= 0: pid=82052: Wed Jul 24 21:40:47 2024 00:19:03.977 read: IOPS=8537, BW=33.4MiB/s (35.0MB/s)(334MiB/10001msec) 00:19:03.977 slat (usec): min=6, max=103, avg= 8.86, stdev= 3.64 00:19:03.977 clat (usec): min=342, max=7631, avg=442.30, stdev=63.33 00:19:03.977 lat (usec): min=349, max=7641, avg=451.16, stdev=63.79 00:19:03.977 clat percentiles (usec): 00:19:03.977 | 1.00th=[ 371], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 416], 00:19:03.977 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:19:03.977 | 70.00th=[ 453], 80.00th=[ 469], 90.00th=[ 486], 95.00th=[ 502], 00:19:03.977 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[ 611], 00:19:03.977 | 99.99th=[ 1270] 00:19:03.977 bw ( KiB/s): min=33344, max=35200, per=100.00%, avg=34201.26, stdev=573.48, samples=19 00:19:03.977 iops : min= 8336, max= 8800, avg=8550.32, stdev=143.37, samples=19 00:19:03.977 lat (usec) : 500=94.40%, 750=5.57%, 1000=0.01% 00:19:03.977 lat 
(msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:19:03.977 cpu : usr=84.42%, sys=13.51%, ctx=7, majf=0, minf=0 00:19:03.977 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.977 issued rwts: total=85388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.977 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:03.977 00:19:03.977 Run status group 0 (all jobs): 00:19:03.977 READ: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=334MiB (350MB), run=10001-10001msec 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 ************************************ 00:19:03.977 END TEST fio_dif_1_default 00:19:03.977 ************************************ 00:19:03.977 00:19:03.977 real 0m10.998s 00:19:03.977 user 0m9.073s 00:19:03.977 sys 0m1.625s 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:03.977 21:40:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:03.977 21:40:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 ************************************ 00:19:03.977 START TEST fio_dif_1_multi_subsystems 00:19:03.977 ************************************ 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:03.977 
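The fio_dif_1_default test that just completed exercises DIF type 1 end to end: a null bdev with 512-byte blocks and 16 bytes of metadata is exported through subsystem cnode0 over TCP, and fio drives it from the host side through the spdk_bdev ioengine using a generated JSON config whose bdev_nvme_attach_controller parameters match the JSON printed above. A consolidated, hedged sketch of the RPC sequence the trace shows (values copied from the trace; the fio job file itself is produced by gen_fio_conf and is not reproduced here):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # fio is then run with the spdk_bdev fio plugin in LD_PRELOAD and --ioengine=spdk_bdev,
  # pointing --spdk_json_conf at the generated config (traddr 10.0.0.2, trsvcid 4420, subnqn ...cnode0)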
21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 bdev_null0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 [2024-07-24 21:40:47.549928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 bdev_null1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.977 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.978 { 00:19:03.978 "params": { 00:19:03.978 "name": "Nvme$subsystem", 00:19:03.978 "trtype": "$TEST_TRANSPORT", 00:19:03.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.978 "adrfam": "ipv4", 00:19:03.978 "trsvcid": "$NVMF_PORT", 00:19:03.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.978 "hdgst": ${hdgst:-false}, 00:19:03.978 "ddgst": ${ddgst:-false} 00:19:03.978 }, 00:19:03.978 "method": "bdev_nvme_attach_controller" 00:19:03.978 } 00:19:03.978 EOF 00:19:03.978 )") 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.978 { 00:19:03.978 "params": { 00:19:03.978 "name": "Nvme$subsystem", 00:19:03.978 "trtype": "$TEST_TRANSPORT", 00:19:03.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.978 "adrfam": "ipv4", 00:19:03.978 "trsvcid": "$NVMF_PORT", 00:19:03.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.978 "hdgst": ${hdgst:-false}, 00:19:03.978 "ddgst": ${ddgst:-false} 00:19:03.978 }, 00:19:03.978 "method": "bdev_nvme_attach_controller" 00:19:03.978 } 00:19:03.978 EOF 00:19:03.978 )") 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:03.978 "params": { 00:19:03.978 "name": "Nvme0", 00:19:03.978 "trtype": "tcp", 00:19:03.978 "traddr": "10.0.0.2", 00:19:03.978 "adrfam": "ipv4", 00:19:03.978 "trsvcid": "4420", 00:19:03.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:03.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:03.978 "hdgst": false, 00:19:03.978 "ddgst": false 00:19:03.978 }, 00:19:03.978 "method": "bdev_nvme_attach_controller" 00:19:03.978 },{ 00:19:03.978 "params": { 00:19:03.978 "name": "Nvme1", 00:19:03.978 "trtype": "tcp", 00:19:03.978 "traddr": "10.0.0.2", 00:19:03.978 "adrfam": "ipv4", 00:19:03.978 "trsvcid": "4420", 00:19:03.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.978 "hdgst": false, 00:19:03.978 "ddgst": false 00:19:03.978 }, 00:19:03.978 "method": "bdev_nvme_attach_controller" 00:19:03.978 }' 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.978 21:40:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.978 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:03.978 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:03.978 fio-3.35 00:19:03.978 Starting 2 threads 00:19:13.959 00:19:13.959 filename0: (groupid=0, jobs=1): err= 0: pid=82211: Wed Jul 24 21:40:58 2024 00:19:13.959 read: IOPS=4648, BW=18.2MiB/s (19.0MB/s)(182MiB/10001msec) 00:19:13.959 slat (nsec): min=6732, max=64390, avg=13823.98, stdev=4032.67 00:19:13.959 clat (usec): min=675, max=1543, avg=822.00, stdev=46.02 00:19:13.959 lat (usec): min=688, max=1571, avg=835.83, stdev=46.48 00:19:13.959 clat percentiles (usec): 00:19:13.959 | 1.00th=[ 742], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:19:13.959 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:19:13.959 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 906], 00:19:13.959 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1029], 00:19:13.959 | 99.99th=[ 1237] 00:19:13.959 bw ( KiB/s): min=18272, max=18880, per=50.01%, avg=18601.79, stdev=138.05, samples=19 00:19:13.959 iops : min= 4568, max= 
4720, avg=4650.42, stdev=34.53, samples=19 00:19:13.959 lat (usec) : 750=2.57%, 1000=97.30% 00:19:13.959 lat (msec) : 2=0.13% 00:19:13.959 cpu : usr=89.64%, sys=8.99%, ctx=18, majf=0, minf=9 00:19:13.959 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.959 issued rwts: total=46492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.959 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:13.959 filename1: (groupid=0, jobs=1): err= 0: pid=82212: Wed Jul 24 21:40:58 2024 00:19:13.959 read: IOPS=4649, BW=18.2MiB/s (19.0MB/s)(182MiB/10001msec) 00:19:13.959 slat (nsec): min=6909, max=82005, avg=13504.60, stdev=3952.15 00:19:13.959 clat (usec): min=472, max=1519, avg=824.19, stdev=52.02 00:19:13.959 lat (usec): min=481, max=1545, avg=837.70, stdev=52.69 00:19:13.959 clat percentiles (usec): 00:19:13.959 | 1.00th=[ 709], 5.00th=[ 742], 10.00th=[ 766], 20.00th=[ 783], 00:19:13.959 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 832], 00:19:13.959 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 914], 00:19:13.959 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1012], 99.95th=[ 1037], 00:19:13.959 | 99.99th=[ 1106] 00:19:13.959 bw ( KiB/s): min=18272, max=18880, per=50.02%, avg=18603.47, stdev=137.22, samples=19 00:19:13.959 iops : min= 4568, max= 4720, avg=4650.84, stdev=34.31, samples=19 00:19:13.959 lat (usec) : 500=0.01%, 750=6.61%, 1000=93.20% 00:19:13.959 lat (msec) : 2=0.18% 00:19:13.959 cpu : usr=89.75%, sys=8.92%, ctx=21, majf=0, minf=0 00:19:13.959 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.959 issued rwts: total=46496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.959 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:13.959 00:19:13.959 Run status group 0 (all jobs): 00:19:13.959 READ: bw=36.3MiB/s (38.1MB/s), 18.2MiB/s-18.2MiB/s (19.0MB/s-19.0MB/s), io=363MiB (381MB), run=10001-10001msec 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 ************************************ 00:19:13.959 END TEST fio_dif_1_multi_subsystems 00:19:13.959 ************************************ 00:19:13.959 00:19:13.959 real 0m11.118s 00:19:13.959 user 0m18.673s 00:19:13.959 sys 0m2.085s 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:13.959 21:40:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:13.959 21:40:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 ************************************ 00:19:13.959 START TEST fio_dif_rand_params 00:19:13.959 ************************************ 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:13.959 21:40:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 bdev_null0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 [2024-07-24 21:40:58.717585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:13.959 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.960 21:40:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:13.960 { 00:19:13.960 "params": { 00:19:13.960 "name": "Nvme$subsystem", 00:19:13.960 "trtype": "$TEST_TRANSPORT", 00:19:13.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:13.960 "adrfam": "ipv4", 00:19:13.960 "trsvcid": "$NVMF_PORT", 00:19:13.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:13.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:13.960 "hdgst": ${hdgst:-false}, 00:19:13.960 "ddgst": ${ddgst:-false} 00:19:13.960 }, 00:19:13.960 "method": "bdev_nvme_attach_controller" 00:19:13.960 } 00:19:13.960 EOF 00:19:13.960 )") 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
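The trace above (nvmf/common.sh@532 through @556) shows how gen_nvmf_target_json builds the fio bdev configuration: one bdev_nvme_attach_controller block is rendered per subsystem from a here-document template, the blocks are comma-joined with IFS=, and printf, and the result is pretty-printed with jq. A minimal stand-alone sketch of that pattern follows, assuming bash and jq are available; the helper name gen_target_json and the outer "subsystems"/"bdev" wrapper object are illustrative assumptions, since only the per-controller blocks and the join/jq steps appear verbatim in the trace.

# Sketch only: render one bdev_nvme_attach_controller block per subsystem id,
# comma-join the blocks and pretty-print the resulting JSON bdev config.
gen_target_json() { # hypothetical name; the suite's own helper is gen_nvmf_target_json
    local subsystem joined
    local config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    # Comma-join the blocks (IFS=, plus "${config[*]}") and pretty-print,
    # as the @557/@558 trace lines below do.
    joined=$(IFS=,; printf '%s\n' "${config[*]}")
    jq . <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [ $joined ] } ] }
EOF
}

gen_target_json 0    # single-controller case, matching the config printed next in the log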
00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:13.960 "params": { 00:19:13.960 "name": "Nvme0", 00:19:13.960 "trtype": "tcp", 00:19:13.960 "traddr": "10.0.0.2", 00:19:13.960 "adrfam": "ipv4", 00:19:13.960 "trsvcid": "4420", 00:19:13.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:13.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:13.960 "hdgst": false, 00:19:13.960 "ddgst": false 00:19:13.960 }, 00:19:13.960 "method": "bdev_nvme_attach_controller" 00:19:13.960 }' 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.960 21:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:13.960 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:13.960 ... 
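The JSON printed above, together with a generated fio job file, is fed to fio over the /dev/fd/62 and /dev/fd/61 process substitutions, with the SPDK bdev engine LD_PRELOADed from build/fio/spdk_bdev; rpc_cmd in the earlier trace is the suite's wrapper around scripts/rpc.py. A stand-alone sketch of the same flow using ordinary files follows; the dif.fio contents and the Nvme0n1 bdev name are assumptions pieced together from the parameters the trace sets (128k blocks, 3 jobs, queue depth 3, 5 s runtime), not copied from it.

# Target side: export a DIF-enabled null bdev over NVMe/TCP (arguments as traced at 21:40:58).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the controller description (sketch helper above) plus an assumed job file.
gen_target_json 0 > bdev.json
cat > dif.fio <<'EOF'
[global]
thread=1                # SPDK's fio plugins are run in fio's thread mode
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1        # assumed bdev name: namespace 1 of the attached Nvme0 controller
EOF

# Run fio through the bdev plugin, mirroring the LD_PRELOAD invocation above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio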
00:19:13.960 fio-3.35 00:19:13.960 Starting 3 threads 00:19:20.528 00:19:20.528 filename0: (groupid=0, jobs=1): err= 0: pid=82368: Wed Jul 24 21:41:04 2024 00:19:20.528 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5001msec) 00:19:20.528 slat (nsec): min=7642, max=54266, avg=14252.08, stdev=3682.78 00:19:20.528 clat (usec): min=9126, max=12368, avg=11707.56, stdev=228.07 00:19:20.528 lat (usec): min=9136, max=12384, avg=11721.81, stdev=228.15 00:19:20.528 clat percentiles (usec): 00:19:20.528 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11469], 20.00th=[11600], 00:19:20.528 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:19:20.528 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:19:20.528 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:19:20.528 | 99.99th=[12387] 00:19:20.528 bw ( KiB/s): min=32256, max=33024, per=33.34%, avg=32682.67, stdev=404.77, samples=9 00:19:20.528 iops : min= 252, max= 258, avg=255.33, stdev= 3.16, samples=9 00:19:20.528 lat (msec) : 10=0.23%, 20=99.77% 00:19:20.528 cpu : usr=91.08%, sys=8.24%, ctx=3, majf=0, minf=0 00:19:20.528 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.528 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:20.528 filename0: (groupid=0, jobs=1): err= 0: pid=82369: Wed Jul 24 21:41:04 2024 00:19:20.528 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5007msec) 00:19:20.528 slat (nsec): min=6117, max=40009, avg=9953.51, stdev=3456.09 00:19:20.528 clat (usec): min=7376, max=12386, avg=11699.74, stdev=347.88 00:19:20.528 lat (usec): min=7383, max=12398, avg=11709.70, stdev=347.90 00:19:20.528 clat percentiles (usec): 00:19:20.528 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11469], 20.00th=[11600], 00:19:20.528 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:19:20.528 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:19:20.528 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:19:20.528 | 99.99th=[12387] 00:19:20.528 bw ( KiB/s): min=32256, max=33024, per=33.37%, avg=32716.80, stdev=396.59, samples=10 00:19:20.528 iops : min= 252, max= 258, avg=255.60, stdev= 3.10, samples=10 00:19:20.528 lat (msec) : 10=0.47%, 20=99.53% 00:19:20.528 cpu : usr=91.13%, sys=8.05%, ctx=70, majf=0, minf=0 00:19:20.529 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.529 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:20.529 filename0: (groupid=0, jobs=1): err= 0: pid=82370: Wed Jul 24 21:41:04 2024 00:19:20.529 read: IOPS=255, BW=31.9MiB/s (33.4MB/s)(160MiB/5003msec) 00:19:20.529 slat (nsec): min=7749, max=39949, avg=13950.61, stdev=2825.49 00:19:20.529 clat (usec): min=2980, max=20025, avg=11727.18, stdev=508.08 00:19:20.529 lat (usec): min=2992, max=20049, avg=11741.14, stdev=508.15 00:19:20.529 clat percentiles (usec): 00:19:20.529 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11469], 20.00th=[11600], 00:19:20.529 | 30.00th=[11600], 40.00th=[11731], 
50.00th=[11731], 60.00th=[11731], 00:19:20.529 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:19:20.529 | 99.00th=[12256], 99.50th=[12256], 99.90th=[20055], 99.95th=[20055], 00:19:20.529 | 99.99th=[20055] 00:19:20.529 bw ( KiB/s): min=31551, max=33024, per=33.26%, avg=32604.33, stdev=542.46, samples=9 00:19:20.529 iops : min= 246, max= 258, avg=254.67, stdev= 4.36, samples=9 00:19:20.529 lat (msec) : 4=0.08%, 20=99.69%, 50=0.24% 00:19:20.529 cpu : usr=90.92%, sys=8.60%, ctx=7, majf=0, minf=0 00:19:20.529 IO depths : 1=33.4%, 2=66.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.529 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:20.529 00:19:20.529 Run status group 0 (all jobs): 00:19:20.529 READ: bw=95.7MiB/s (100MB/s), 31.9MiB/s-32.0MiB/s (33.4MB/s-33.5MB/s), io=479MiB (503MB), run=5001-5007msec 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:20.529 
21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 bdev_null0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 [2024-07-24 21:41:04.702828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 bdev_null1 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:20.529 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.530 bdev_null2 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.530 { 00:19:20.530 "params": { 00:19:20.530 "name": "Nvme$subsystem", 00:19:20.530 "trtype": "$TEST_TRANSPORT", 00:19:20.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.530 "adrfam": "ipv4", 00:19:20.530 "trsvcid": "$NVMF_PORT", 00:19:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.530 "hdgst": ${hdgst:-false}, 00:19:20.530 "ddgst": ${ddgst:-false} 00:19:20.530 }, 00:19:20.530 "method": "bdev_nvme_attach_controller" 00:19:20.530 } 00:19:20.530 EOF 00:19:20.530 )") 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.530 { 00:19:20.530 "params": { 00:19:20.530 "name": "Nvme$subsystem", 00:19:20.530 "trtype": "$TEST_TRANSPORT", 00:19:20.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.530 "adrfam": "ipv4", 00:19:20.530 "trsvcid": "$NVMF_PORT", 00:19:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.530 "hdgst": ${hdgst:-false}, 00:19:20.530 "ddgst": ${ddgst:-false} 00:19:20.530 }, 00:19:20.530 "method": "bdev_nvme_attach_controller" 00:19:20.530 } 00:19:20.530 EOF 00:19:20.530 )") 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.530 { 00:19:20.530 "params": { 00:19:20.530 "name": "Nvme$subsystem", 00:19:20.530 "trtype": "$TEST_TRANSPORT", 00:19:20.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.530 "adrfam": "ipv4", 00:19:20.530 "trsvcid": "$NVMF_PORT", 00:19:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.530 "hdgst": ${hdgst:-false}, 00:19:20.530 "ddgst": ${ddgst:-false} 00:19:20.530 }, 00:19:20.530 "method": "bdev_nvme_attach_controller" 00:19:20.530 } 00:19:20.530 EOF 00:19:20.530 )") 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:20.530 "params": { 00:19:20.530 "name": "Nvme0", 00:19:20.530 "trtype": "tcp", 00:19:20.530 "traddr": "10.0.0.2", 00:19:20.530 "adrfam": "ipv4", 00:19:20.530 "trsvcid": "4420", 00:19:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:20.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:20.530 "hdgst": false, 00:19:20.530 "ddgst": false 00:19:20.530 }, 00:19:20.530 "method": "bdev_nvme_attach_controller" 00:19:20.530 },{ 00:19:20.530 "params": { 00:19:20.530 "name": "Nvme1", 00:19:20.530 "trtype": "tcp", 00:19:20.530 "traddr": "10.0.0.2", 00:19:20.530 "adrfam": "ipv4", 00:19:20.530 "trsvcid": "4420", 00:19:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.530 "hdgst": false, 00:19:20.530 "ddgst": false 00:19:20.530 }, 00:19:20.530 "method": "bdev_nvme_attach_controller" 00:19:20.530 },{ 00:19:20.530 "params": { 00:19:20.530 "name": "Nvme2", 00:19:20.530 "trtype": "tcp", 00:19:20.530 "traddr": "10.0.0.2", 00:19:20.530 "adrfam": "ipv4", 00:19:20.530 "trsvcid": "4420", 00:19:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:20.530 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:20.530 "hdgst": false, 00:19:20.530 "ddgst": false 00:19:20.530 }, 00:19:20.530 "method": "bdev_nvme_attach_controller" 00:19:20.530 }' 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:20.530 21:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:20.530 ... 00:19:20.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:20.530 ... 00:19:20.530 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:20.530 ... 00:19:20.530 fio-3.35 00:19:20.530 Starting 24 threads 00:19:32.732 00:19:32.732 filename0: (groupid=0, jobs=1): err= 0: pid=82466: Wed Jul 24 21:41:15 2024 00:19:32.732 read: IOPS=217, BW=870KiB/s (891kB/s)(8724KiB/10027msec) 00:19:32.732 slat (usec): min=7, max=8025, avg=28.27, stdev=342.87 00:19:32.732 clat (msec): min=35, max=144, avg=73.37, stdev=19.94 00:19:32.732 lat (msec): min=35, max=144, avg=73.39, stdev=19.94 00:19:32.732 clat percentiles (msec): 00:19:32.732 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:19:32.732 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:19:32.732 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 110], 00:19:32.732 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 144], 00:19:32.732 | 99.99th=[ 144] 00:19:32.732 bw ( KiB/s): min= 664, max= 1024, per=4.22%, avg=866.70, stdev=104.63, samples=20 00:19:32.732 iops : min= 166, max= 256, avg=216.65, stdev=26.14, samples=20 00:19:32.732 lat (msec) : 50=18.29%, 100=70.24%, 250=11.46% 00:19:32.732 cpu : usr=31.16%, sys=2.05%, ctx=849, majf=0, minf=9 00:19:32.732 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:32.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.732 filename0: (groupid=0, jobs=1): err= 0: pid=82467: Wed Jul 24 21:41:15 2024 00:19:32.732 read: IOPS=213, BW=854KiB/s (874kB/s)(8576KiB/10045msec) 00:19:32.732 slat (usec): min=3, max=8028, avg=21.70, stdev=221.77 00:19:32.732 clat (msec): min=2, max=131, avg=74.71, stdev=21.87 00:19:32.732 lat (msec): min=2, max=131, avg=74.73, stdev=21.87 00:19:32.732 clat percentiles (msec): 00:19:32.732 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 60], 00:19:32.732 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:32.732 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 109], 95.00th=[ 114], 00:19:32.732 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 132], 00:19:32.732 | 99.99th=[ 132] 00:19:32.732 bw ( KiB/s): min= 616, max= 1280, per=4.15%, avg=851.20, stdev=135.10, samples=20 00:19:32.732 iops : min= 154, max= 320, avg=212.80, stdev=33.78, samples=20 00:19:32.732 lat (msec) : 4=0.09%, 10=1.31%, 20=0.75%, 50=8.86%, 100=71.88% 00:19:32.732 lat (msec) : 250=17.12% 00:19:32.732 cpu : usr=42.97%, sys=2.41%, ctx=1434, majf=0, minf=9 00:19:32.732 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:32.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 
issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.732 filename0: (groupid=0, jobs=1): err= 0: pid=82468: Wed Jul 24 21:41:15 2024 00:19:32.732 read: IOPS=212, BW=850KiB/s (870kB/s)(8528KiB/10034msec) 00:19:32.732 slat (usec): min=4, max=8024, avg=22.29, stdev=260.19 00:19:32.732 clat (msec): min=35, max=126, avg=75.19, stdev=20.20 00:19:32.732 lat (msec): min=35, max=126, avg=75.21, stdev=20.21 00:19:32.732 clat percentiles (msec): 00:19:32.732 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:32.732 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:32.732 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 116], 00:19:32.732 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 127], 99.95th=[ 127], 00:19:32.732 | 99.99th=[ 127] 00:19:32.732 bw ( KiB/s): min= 648, max= 1010, per=4.13%, avg=846.10, stdev=100.96, samples=20 00:19:32.732 iops : min= 162, max= 252, avg=211.50, stdev=25.20, samples=20 00:19:32.732 lat (msec) : 50=12.34%, 100=73.64%, 250=14.02% 00:19:32.732 cpu : usr=38.61%, sys=2.35%, ctx=1284, majf=0, minf=9 00:19:32.732 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:32.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.732 filename0: (groupid=0, jobs=1): err= 0: pid=82469: Wed Jul 24 21:41:15 2024 00:19:32.732 read: IOPS=206, BW=825KiB/s (844kB/s)(8292KiB/10056msec) 00:19:32.732 slat (usec): min=5, max=12020, avg=29.62, stdev=395.81 00:19:32.732 clat (msec): min=3, max=149, avg=77.25, stdev=21.53 00:19:32.732 lat (msec): min=3, max=149, avg=77.28, stdev=21.51 00:19:32.732 clat percentiles (msec): 00:19:32.732 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:19:32.732 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:19:32.732 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:19:32.732 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 140], 99.95th=[ 144], 00:19:32.732 | 99.99th=[ 150] 00:19:32.732 bw ( KiB/s): min= 584, max= 1264, per=4.01%, avg=822.80, stdev=131.92, samples=20 00:19:32.732 iops : min= 146, max= 316, avg=205.70, stdev=32.98, samples=20 00:19:32.732 lat (msec) : 4=0.34%, 10=0.72%, 20=0.39%, 50=9.02%, 100=74.82% 00:19:32.732 lat (msec) : 250=14.71% 00:19:32.732 cpu : usr=32.21%, sys=2.14%, ctx=985, majf=0, minf=9 00:19:32.732 IO depths : 1=0.1%, 2=1.6%, 4=6.6%, 8=75.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:32.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 complete : 0=0.0%, 4=89.6%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.732 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.732 filename0: (groupid=0, jobs=1): err= 0: pid=82470: Wed Jul 24 21:41:15 2024 00:19:32.732 read: IOPS=224, BW=899KiB/s (920kB/s)(9004KiB/10017msec) 00:19:32.732 slat (usec): min=4, max=6118, avg=21.66, stdev=210.02 00:19:32.732 clat (msec): min=20, max=131, avg=71.08, stdev=20.86 00:19:32.732 lat (msec): min=20, max=131, avg=71.11, stdev=20.85 00:19:32.732 clat percentiles (msec): 00:19:32.732 | 1.00th=[ 29], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:19:32.732 | 30.00th=[ 60], 40.00th=[ 
66], 50.00th=[ 72], 60.00th=[ 73], 00:19:32.732 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 113], 00:19:32.732 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 125], 00:19:32.732 | 99.99th=[ 132] 00:19:32.732 bw ( KiB/s): min= 664, max= 1024, per=4.37%, avg=895.70, stdev=100.47, samples=20 00:19:32.732 iops : min= 166, max= 256, avg=223.90, stdev=25.09, samples=20 00:19:32.733 lat (msec) : 50=20.48%, 100=68.19%, 250=11.33% 00:19:32.733 cpu : usr=36.74%, sys=2.66%, ctx=1302, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename0: (groupid=0, jobs=1): err= 0: pid=82471: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=214, BW=859KiB/s (879kB/s)(8612KiB/10028msec) 00:19:32.733 slat (usec): min=4, max=8028, avg=23.28, stdev=258.98 00:19:32.733 clat (msec): min=33, max=143, avg=74.33, stdev=21.28 00:19:32.733 lat (msec): min=33, max=143, avg=74.36, stdev=21.27 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:19:32.733 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:19:32.733 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 117], 00:19:32.733 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:19:32.733 | 99.99th=[ 144] 00:19:32.733 bw ( KiB/s): min= 608, max= 1000, per=4.16%, avg=852.68, stdev=115.23, samples=19 00:19:32.733 iops : min= 152, max= 250, avg=213.16, stdev=28.82, samples=19 00:19:32.733 lat (msec) : 50=15.10%, 100=70.97%, 250=13.93% 00:19:32.733 cpu : usr=36.46%, sys=2.30%, ctx=1085, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename0: (groupid=0, jobs=1): err= 0: pid=82472: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=209, BW=840KiB/s (860kB/s)(8412KiB/10018msec) 00:19:32.733 slat (usec): min=4, max=8037, avg=25.40, stdev=270.24 00:19:32.733 clat (msec): min=22, max=150, avg=76.06, stdev=20.28 00:19:32.733 lat (msec): min=22, max=150, avg=76.09, stdev=20.28 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:19:32.733 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 78], 00:19:32.733 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 115], 00:19:32.733 | 99.00th=[ 125], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:19:32.733 | 99.99th=[ 150] 00:19:32.733 bw ( KiB/s): min= 640, max= 986, per=4.08%, avg=836.50, stdev=113.11, samples=20 00:19:32.733 iops : min= 160, max= 246, avg=209.10, stdev=28.24, samples=20 00:19:32.733 lat (msec) : 50=12.13%, 100=74.42%, 250=13.46% 00:19:32.733 cpu : usr=40.83%, sys=2.68%, ctx=1295, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=73.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:32.733 complete : 0=0.0%, 4=89.7%, 8=8.1%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename0: (groupid=0, jobs=1): err= 0: pid=82473: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=206, BW=825KiB/s (845kB/s)(8272KiB/10021msec) 00:19:32.733 slat (usec): min=4, max=6024, avg=18.87, stdev=160.59 00:19:32.733 clat (msec): min=20, max=144, avg=77.42, stdev=20.88 00:19:32.733 lat (msec): min=20, max=144, avg=77.44, stdev=20.89 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:32.733 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:19:32.733 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 109], 95.00th=[ 120], 00:19:32.733 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:32.733 | 99.99th=[ 144] 00:19:32.733 bw ( KiB/s): min= 640, max= 978, per=4.01%, avg=822.25, stdev=111.36, samples=20 00:19:32.733 iops : min= 160, max= 244, avg=205.50, stdev=27.83, samples=20 00:19:32.733 lat (msec) : 50=13.39%, 100=70.16%, 250=16.44% 00:19:32.733 cpu : usr=34.35%, sys=2.14%, ctx=999, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=2.9%, 4=11.4%, 8=71.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename1: (groupid=0, jobs=1): err= 0: pid=82474: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=215, BW=861KiB/s (882kB/s)(8644KiB/10038msec) 00:19:32.733 slat (usec): min=7, max=8025, avg=27.66, stdev=344.49 00:19:32.733 clat (msec): min=24, max=143, avg=74.17, stdev=20.49 00:19:32.733 lat (msec): min=24, max=143, avg=74.20, stdev=20.49 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:19:32.733 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:19:32.733 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 112], 00:19:32.733 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:19:32.733 | 99.99th=[ 144] 00:19:32.733 bw ( KiB/s): min= 672, max= 992, per=4.18%, avg=857.30, stdev=93.31, samples=20 00:19:32.733 iops : min= 168, max= 248, avg=214.30, stdev=23.31, samples=20 00:19:32.733 lat (msec) : 50=16.01%, 100=70.34%, 250=13.65% 00:19:32.733 cpu : usr=31.30%, sys=1.98%, ctx=849, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename1: (groupid=0, jobs=1): err= 0: pid=82475: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=206, BW=825KiB/s (845kB/s)(8292KiB/10048msec) 00:19:32.733 slat (usec): min=5, max=11230, avg=38.00, stdev=463.70 00:19:32.733 clat (msec): min=35, max=142, avg=77.34, stdev=20.11 00:19:32.733 lat (msec): min=36, max=142, avg=77.38, stdev=20.13 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 
61], 00:19:32.733 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:19:32.733 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:19:32.733 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 144], 00:19:32.733 | 99.99th=[ 144] 00:19:32.733 bw ( KiB/s): min= 592, max= 976, per=4.01%, avg=822.40, stdev=95.94, samples=20 00:19:32.733 iops : min= 148, max= 244, avg=205.55, stdev=23.99, samples=20 00:19:32.733 lat (msec) : 50=11.82%, 100=71.59%, 250=16.59% 00:19:32.733 cpu : usr=32.22%, sys=2.00%, ctx=1044, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename1: (groupid=0, jobs=1): err= 0: pid=82476: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=212, BW=848KiB/s (869kB/s)(8508KiB/10029msec) 00:19:32.733 slat (usec): min=3, max=5024, avg=22.46, stdev=185.73 00:19:32.733 clat (msec): min=30, max=134, avg=75.30, stdev=20.47 00:19:32.733 lat (msec): min=30, max=134, avg=75.32, stdev=20.47 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:19:32.733 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:19:32.733 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 117], 00:19:32.733 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 134], 00:19:32.733 | 99.99th=[ 136] 00:19:32.733 bw ( KiB/s): min= 664, max= 1024, per=4.11%, avg=843.85, stdev=92.31, samples=20 00:19:32.733 iops : min= 166, max= 256, avg=210.95, stdev=23.06, samples=20 00:19:32.733 lat (msec) : 50=13.45%, 100=71.74%, 250=14.81% 00:19:32.733 cpu : usr=40.78%, sys=2.52%, ctx=1252, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename1: (groupid=0, jobs=1): err= 0: pid=82477: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=214, BW=858KiB/s (879kB/s)(8616KiB/10043msec) 00:19:32.733 slat (usec): min=4, max=3529, avg=15.32, stdev=75.97 00:19:32.733 clat (msec): min=5, max=156, avg=74.40, stdev=23.16 00:19:32.733 lat (msec): min=5, max=156, avg=74.41, stdev=23.17 00:19:32.733 clat percentiles (msec): 00:19:32.733 | 1.00th=[ 7], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 59], 00:19:32.733 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:32.733 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 116], 00:19:32.733 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:19:32.733 | 99.99th=[ 157] 00:19:32.733 bw ( KiB/s): min= 584, max= 1383, per=4.18%, avg=857.55, stdev=165.66, samples=20 00:19:32.733 iops : min= 146, max= 345, avg=214.35, stdev=41.29, samples=20 00:19:32.733 lat (msec) : 10=1.95%, 20=0.84%, 50=11.28%, 100=70.43%, 250=15.51% 00:19:32.733 cpu : usr=37.20%, sys=2.23%, ctx=1184, majf=0, minf=9 00:19:32.733 IO depths : 1=0.1%, 2=1.6%, 4=5.9%, 8=76.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:32.733 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.733 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.733 filename1: (groupid=0, jobs=1): err= 0: pid=82478: Wed Jul 24 21:41:15 2024 00:19:32.733 read: IOPS=202, BW=809KiB/s (829kB/s)(8116KiB/10031msec) 00:19:32.733 slat (usec): min=3, max=4026, avg=23.97, stdev=195.87 00:19:32.733 clat (msec): min=37, max=142, avg=78.89, stdev=19.85 00:19:32.733 lat (msec): min=37, max=142, avg=78.92, stdev=19.85 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 64], 00:19:32.734 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 80], 00:19:32.734 | 70.00th=[ 88], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 115], 00:19:32.734 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 133], 99.95th=[ 133], 00:19:32.734 | 99.99th=[ 144] 00:19:32.734 bw ( KiB/s): min= 640, max= 944, per=3.93%, avg=805.25, stdev=95.10, samples=20 00:19:32.734 iops : min= 160, max= 236, avg=201.30, stdev=23.78, samples=20 00:19:32.734 lat (msec) : 50=6.65%, 100=73.88%, 250=19.47% 00:19:32.734 cpu : usr=44.51%, sys=2.64%, ctx=1545, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=3.2%, 4=12.8%, 8=69.7%, 16=14.2%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=90.7%, 8=6.5%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename1: (groupid=0, jobs=1): err= 0: pid=82479: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=201, BW=805KiB/s (824kB/s)(8076KiB/10032msec) 00:19:32.734 slat (usec): min=4, max=8079, avg=32.60, stdev=315.03 00:19:32.734 clat (msec): min=25, max=144, avg=79.28, stdev=20.46 00:19:32.734 lat (msec): min=25, max=144, avg=79.31, stdev=20.46 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 65], 00:19:32.734 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:19:32.734 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 120], 00:19:32.734 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:19:32.734 | 99.99th=[ 144] 00:19:32.734 bw ( KiB/s): min= 640, max= 1000, per=3.90%, avg=800.35, stdev=100.91, samples=20 00:19:32.734 iops : min= 160, max= 250, avg=200.05, stdev=25.18, samples=20 00:19:32.734 lat (msec) : 50=9.81%, 100=73.16%, 250=17.04% 00:19:32.734 cpu : usr=38.21%, sys=2.22%, ctx=1249, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=3.2%, 4=12.8%, 8=69.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=90.7%, 8=6.5%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename1: (groupid=0, jobs=1): err= 0: pid=82480: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=224, BW=896KiB/s (918kB/s)(8964KiB/10003msec) 00:19:32.734 slat (usec): min=7, max=8033, avg=20.45, stdev=239.46 00:19:32.734 clat (msec): min=2, max=144, avg=71.34, stdev=21.64 00:19:32.734 lat (msec): min=2, max=145, avg=71.36, stdev=21.65 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 
1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 48], 00:19:32.734 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:19:32.734 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 117], 00:19:32.734 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 146], 00:19:32.734 | 99.99th=[ 146] 00:19:32.734 bw ( KiB/s): min= 664, max= 1024, per=4.30%, avg=881.68, stdev=110.19, samples=19 00:19:32.734 iops : min= 166, max= 256, avg=220.42, stdev=27.55, samples=19 00:19:32.734 lat (msec) : 4=0.31%, 10=0.27%, 50=22.36%, 100=65.37%, 250=11.69% 00:19:32.734 cpu : usr=31.22%, sys=1.99%, ctx=862, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename1: (groupid=0, jobs=1): err= 0: pid=82481: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=227, BW=910KiB/s (932kB/s)(9108KiB/10009msec) 00:19:32.734 slat (usec): min=4, max=8023, avg=27.31, stdev=279.20 00:19:32.734 clat (msec): min=20, max=155, avg=70.22, stdev=21.20 00:19:32.734 lat (msec): min=20, max=155, avg=70.25, stdev=21.20 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 27], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:19:32.734 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:19:32.734 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 114], 00:19:32.734 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 155], 00:19:32.734 | 99.99th=[ 155] 00:19:32.734 bw ( KiB/s): min= 664, max= 1024, per=4.39%, avg=900.63, stdev=112.58, samples=19 00:19:32.734 iops : min= 166, max= 256, avg=225.16, stdev=28.14, samples=19 00:19:32.734 lat (msec) : 50=22.09%, 100=66.58%, 250=11.33% 00:19:32.734 cpu : usr=40.29%, sys=2.32%, ctx=1345, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename2: (groupid=0, jobs=1): err= 0: pid=82482: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=199, BW=800KiB/s (819kB/s)(8028KiB/10038msec) 00:19:32.734 slat (usec): min=4, max=8026, avg=21.49, stdev=252.87 00:19:32.734 clat (msec): min=35, max=144, avg=79.82, stdev=21.21 00:19:32.734 lat (msec): min=36, max=144, avg=79.84, stdev=21.21 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 64], 00:19:32.734 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:19:32.734 | 70.00th=[ 87], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 120], 00:19:32.734 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:32.734 | 99.99th=[ 144] 00:19:32.734 bw ( KiB/s): min= 528, max= 1048, per=3.88%, avg=795.85, stdev=132.13, samples=20 00:19:32.734 iops : min= 132, max= 262, avg=198.90, stdev=32.98, samples=20 00:19:32.734 lat (msec) : 50=9.77%, 100=68.41%, 250=21.82% 00:19:32.734 cpu : usr=32.27%, sys=2.10%, ctx=946, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=3.0%, 4=12.1%, 8=70.4%, 
16=14.5%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=90.6%, 8=6.7%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename2: (groupid=0, jobs=1): err= 0: pid=82483: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=219, BW=877KiB/s (898kB/s)(8796KiB/10026msec) 00:19:32.734 slat (usec): min=4, max=8027, avg=30.57, stdev=351.87 00:19:32.734 clat (msec): min=20, max=143, avg=72.78, stdev=21.51 00:19:32.734 lat (msec): min=20, max=143, avg=72.81, stdev=21.51 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:19:32.734 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:19:32.734 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 117], 00:19:32.734 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 144], 00:19:32.734 | 99.99th=[ 144] 00:19:32.734 bw ( KiB/s): min= 640, max= 1024, per=4.26%, avg=874.35, stdev=109.54, samples=20 00:19:32.734 iops : min= 160, max= 256, avg=218.55, stdev=27.38, samples=20 00:19:32.734 lat (msec) : 50=19.37%, 100=66.48%, 250=14.14% 00:19:32.734 cpu : usr=34.67%, sys=2.44%, ctx=1109, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename2: (groupid=0, jobs=1): err= 0: pid=82484: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=218, BW=873KiB/s (893kB/s)(8756KiB/10035msec) 00:19:32.734 slat (nsec): min=3745, max=35900, avg=13258.59, stdev=4306.18 00:19:32.734 clat (msec): min=6, max=155, avg=73.18, stdev=22.74 00:19:32.734 lat (msec): min=6, max=155, avg=73.20, stdev=22.74 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 9], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:19:32.734 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:19:32.734 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 117], 00:19:32.734 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 144], 00:19:32.734 | 99.99th=[ 157] 00:19:32.734 bw ( KiB/s): min= 608, max= 1333, per=4.25%, avg=871.20, stdev=153.40, samples=20 00:19:32.734 iops : min= 152, max= 333, avg=217.75, stdev=38.33, samples=20 00:19:32.734 lat (msec) : 10=1.28%, 20=0.41%, 50=16.26%, 100=68.20%, 250=13.84% 00:19:32.734 cpu : usr=35.34%, sys=1.95%, ctx=1021, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=0.1%, 4=0.7%, 8=82.4%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:32.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.734 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.734 filename2: (groupid=0, jobs=1): err= 0: pid=82485: Wed Jul 24 21:41:15 2024 00:19:32.734 read: IOPS=215, BW=863KiB/s (884kB/s)(8664KiB/10038msec) 00:19:32.734 slat (usec): min=4, max=8024, avg=18.87, stdev=192.54 00:19:32.734 clat (msec): min=26, max=139, avg=74.00, stdev=20.43 00:19:32.734 lat (msec): 
min=26, max=139, avg=74.02, stdev=20.43 00:19:32.734 clat percentiles (msec): 00:19:32.734 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:19:32.734 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:32.734 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 114], 00:19:32.734 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 136], 00:19:32.734 | 99.99th=[ 140] 00:19:32.734 bw ( KiB/s): min= 664, max= 1024, per=4.19%, avg=859.20, stdev=102.23, samples=20 00:19:32.734 iops : min= 166, max= 256, avg=214.80, stdev=25.56, samples=20 00:19:32.734 lat (msec) : 50=16.02%, 100=70.87%, 250=13.11% 00:19:32.734 cpu : usr=35.67%, sys=2.43%, ctx=1024, majf=0, minf=9 00:19:32.734 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.735 filename2: (groupid=0, jobs=1): err= 0: pid=82486: Wed Jul 24 21:41:15 2024 00:19:32.735 read: IOPS=204, BW=816KiB/s (836kB/s)(8200KiB/10046msec) 00:19:32.735 slat (usec): min=3, max=8025, avg=27.41, stdev=309.55 00:19:32.735 clat (usec): min=1545, max=167180, avg=78164.68, stdev=29376.26 00:19:32.735 lat (usec): min=1553, max=167200, avg=78192.09, stdev=29373.10 00:19:32.735 clat percentiles (usec): 00:19:32.735 | 1.00th=[ 1647], 5.00th=[ 5538], 10.00th=[ 48497], 20.00th=[ 63177], 00:19:32.735 | 30.00th=[ 68682], 40.00th=[ 72877], 50.00th=[ 76022], 60.00th=[ 80217], 00:19:32.735 | 70.00th=[ 93848], 80.00th=[101188], 90.00th=[112722], 95.00th=[120062], 00:19:32.735 | 99.00th=[143655], 99.50th=[143655], 99.90th=[158335], 99.95th=[166724], 00:19:32.735 | 99.99th=[166724] 00:19:32.735 bw ( KiB/s): min= 400, max= 2032, per=3.97%, avg=813.60, stdev=314.16, samples=20 00:19:32.735 iops : min= 100, max= 508, avg=203.40, stdev=78.54, samples=20 00:19:32.735 lat (msec) : 2=3.12%, 4=0.15%, 10=2.88%, 20=0.78%, 50=3.37% 00:19:32.735 lat (msec) : 100=68.24%, 250=21.46% 00:19:32.735 cpu : usr=42.69%, sys=2.60%, ctx=1698, majf=0, minf=9 00:19:32.735 IO depths : 1=0.2%, 2=4.7%, 4=18.3%, 8=63.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:19:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 complete : 0=0.0%, 4=92.6%, 8=3.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.735 filename2: (groupid=0, jobs=1): err= 0: pid=82487: Wed Jul 24 21:41:15 2024 00:19:32.735 read: IOPS=227, BW=911KiB/s (932kB/s)(9116KiB/10011msec) 00:19:32.735 slat (usec): min=4, max=8024, avg=29.57, stdev=318.77 00:19:32.735 clat (msec): min=22, max=125, avg=70.14, stdev=20.84 00:19:32.735 lat (msec): min=22, max=125, avg=70.17, stdev=20.83 00:19:32.735 clat percentiles (msec): 00:19:32.735 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 50], 00:19:32.735 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:19:32.735 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 112], 00:19:32.735 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 127], 99.95th=[ 127], 00:19:32.735 | 99.99th=[ 127] 00:19:32.735 bw ( KiB/s): min= 664, max= 1024, per=4.42%, avg=907.60, stdev=105.96, samples=20 00:19:32.735 iops : min= 166, max= 256, avg=226.90, 
stdev=26.49, samples=20 00:19:32.735 lat (msec) : 50=21.59%, 100=67.13%, 250=11.28% 00:19:32.735 cpu : usr=37.75%, sys=2.26%, ctx=1089, majf=0, minf=9 00:19:32.735 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.735 filename2: (groupid=0, jobs=1): err= 0: pid=82488: Wed Jul 24 21:41:15 2024 00:19:32.735 read: IOPS=214, BW=859KiB/s (880kB/s)(8616KiB/10030msec) 00:19:32.735 slat (usec): min=4, max=2033, avg=15.15, stdev=43.75 00:19:32.735 clat (msec): min=35, max=138, avg=74.39, stdev=20.77 00:19:32.735 lat (msec): min=35, max=138, avg=74.40, stdev=20.77 00:19:32.735 clat percentiles (msec): 00:19:32.735 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 56], 00:19:32.735 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:19:32.735 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 115], 00:19:32.735 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:19:32.735 | 99.99th=[ 138] 00:19:32.735 bw ( KiB/s): min= 640, max= 1024, per=4.17%, avg=855.00, stdev=110.38, samples=20 00:19:32.735 iops : min= 160, max= 256, avg=213.75, stdev=27.59, samples=20 00:19:32.735 lat (msec) : 50=14.90%, 100=69.82%, 250=15.27% 00:19:32.735 cpu : usr=37.95%, sys=2.33%, ctx=1130, majf=0, minf=9 00:19:32.735 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.735 filename2: (groupid=0, jobs=1): err= 0: pid=82489: Wed Jul 24 21:41:15 2024 00:19:32.735 read: IOPS=230, BW=923KiB/s (945kB/s)(9232KiB/10002msec) 00:19:32.735 slat (usec): min=4, max=10071, avg=36.93, stdev=402.17 00:19:32.735 clat (usec): min=1810, max=152446, avg=69175.59, stdev=22870.42 00:19:32.735 lat (usec): min=1817, max=152460, avg=69212.52, stdev=22866.43 00:19:32.735 clat percentiles (msec): 00:19:32.735 | 1.00th=[ 4], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 49], 00:19:32.735 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:19:32.735 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 112], 00:19:32.735 | 99.00th=[ 125], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 153], 00:19:32.735 | 99.99th=[ 153] 00:19:32.735 bw ( KiB/s): min= 664, max= 1080, per=4.40%, avg=901.53, stdev=120.79, samples=19 00:19:32.735 iops : min= 166, max= 270, avg=225.37, stdev=30.21, samples=19 00:19:32.735 lat (msec) : 2=0.39%, 4=0.82%, 10=0.30%, 50=21.75%, 100=65.73% 00:19:32.735 lat (msec) : 250=11.01% 00:19:32.735 cpu : usr=38.66%, sys=2.15%, ctx=1271, majf=0, minf=9 00:19:32.735 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:32.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.735 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:32.735 00:19:32.735 Run status group 0 (all jobs): 
00:19:32.735 READ: bw=20.0MiB/s (21.0MB/s), 800KiB/s-923KiB/s (819kB/s-945kB/s), io=201MiB (211MB), run=10002-10056msec 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.735 bdev_null0 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:32.735 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 [2024-07-24 21:41:16.028903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
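The trace above finishes the target-side setup for the next fio_dif_rand_params pass: a 64 MB, 512-byte-block null bdev with 16 bytes of metadata and DIF type 1 is exposed as nqn.2016-06.io.spdk:cnode0 behind a TCP listener on 10.0.0.2:4420, and a second identical bdev/subsystem pair is being created for cnode1. rpc_cmd is the autotest wrapper that forwards to scripts/rpc.py, so the same setup done by hand against an already running nvmf_tgt would look roughly like the sketch below (the repo path and the TCP transport created earlier in this run are assumptions carried over from the log, not shown here):

# Sketch only: assumes nvmf_tgt is running and the tcp transport already exists.
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420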
00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 bdev_null1 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:32.736 21:41:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.736 { 00:19:32.736 "params": { 00:19:32.736 "name": "Nvme$subsystem", 00:19:32.736 "trtype": "$TEST_TRANSPORT", 00:19:32.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.736 "adrfam": "ipv4", 00:19:32.736 "trsvcid": "$NVMF_PORT", 00:19:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.736 "hdgst": ${hdgst:-false}, 00:19:32.736 "ddgst": ${ddgst:-false} 00:19:32.736 }, 00:19:32.736 "method": "bdev_nvme_attach_controller" 00:19:32.736 } 00:19:32.736 EOF 00:19:32.736 )") 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.736 { 00:19:32.736 "params": { 00:19:32.736 "name": "Nvme$subsystem", 00:19:32.736 "trtype": "$TEST_TRANSPORT", 00:19:32.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.736 "adrfam": "ipv4", 00:19:32.736 "trsvcid": "$NVMF_PORT", 00:19:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.736 "hdgst": ${hdgst:-false}, 00:19:32.736 "ddgst": ${ddgst:-false} 00:19:32.736 }, 00:19:32.736 "method": "bdev_nvme_attach_controller" 00:19:32.736 } 00:19:32.736 EOF 00:19:32.736 )") 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
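The config+=() heredocs above collect one bdev_nvme_attach_controller fragment per subsystem, and `jq .` merges and pretty-prints them into the JSON that fio receives on /dev/fd/62 (it is echoed in full in the next lines); the fio job file itself is passed on /dev/fd/61 and is not echoed. A standalone invocation consistent with the logged parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, two null-backed namespaces) might look like the sketch below. The job-file fields and the Nvme0n1/Nvme1n1 filenames are inferred rather than copied from this log: the bdev names follow SPDK's usual <controller-name>n<nsid> convention, and thread=1 is the mode the bdev fio plugin requires (the "Starting 4 threads" line below is consistent with that).

# Sketch only: nvme.json stands in for the JSON config printed below.
cat > randread.fio <<'FIO'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme.json randread.fio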
00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:32.736 "params": { 00:19:32.736 "name": "Nvme0", 00:19:32.736 "trtype": "tcp", 00:19:32.736 "traddr": "10.0.0.2", 00:19:32.736 "adrfam": "ipv4", 00:19:32.736 "trsvcid": "4420", 00:19:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:32.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:32.736 "hdgst": false, 00:19:32.736 "ddgst": false 00:19:32.736 }, 00:19:32.736 "method": "bdev_nvme_attach_controller" 00:19:32.736 },{ 00:19:32.736 "params": { 00:19:32.736 "name": "Nvme1", 00:19:32.736 "trtype": "tcp", 00:19:32.736 "traddr": "10.0.0.2", 00:19:32.736 "adrfam": "ipv4", 00:19:32.736 "trsvcid": "4420", 00:19:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.736 "hdgst": false, 00:19:32.736 "ddgst": false 00:19:32.736 }, 00:19:32.736 "method": "bdev_nvme_attach_controller" 00:19:32.736 }' 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:32.736 21:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.736 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:32.736 ... 00:19:32.736 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:32.736 ... 
00:19:32.736 fio-3.35 00:19:32.736 Starting 4 threads 00:19:36.998 00:19:36.998 filename0: (groupid=0, jobs=1): err= 0: pid=82620: Wed Jul 24 21:41:21 2024 00:19:36.998 read: IOPS=2196, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5003msec) 00:19:36.998 slat (nsec): min=5495, max=53710, avg=12620.81, stdev=3527.40 00:19:36.998 clat (usec): min=673, max=6998, avg=3599.28, stdev=715.96 00:19:36.998 lat (usec): min=681, max=7017, avg=3611.90, stdev=716.49 00:19:36.998 clat percentiles (usec): 00:19:36.998 | 1.00th=[ 1401], 5.00th=[ 1795], 10.00th=[ 2311], 20.00th=[ 3359], 00:19:36.998 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3884], 00:19:36.998 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4047], 95.00th=[ 4146], 00:19:36.998 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 4883], 99.95th=[ 4948], 00:19:36.998 | 99.99th=[ 6980] 00:19:36.998 bw ( KiB/s): min=16128, max=21008, per=27.13%, avg=17722.22, stdev=1529.47, samples=9 00:19:36.998 iops : min= 2016, max= 2626, avg=2215.22, stdev=191.19, samples=9 00:19:36.998 lat (usec) : 750=0.02%, 1000=0.05% 00:19:36.998 lat (msec) : 2=6.94%, 4=80.60%, 10=12.38% 00:19:36.998 cpu : usr=91.98%, sys=7.04%, ctx=25, majf=0, minf=0 00:19:36.998 IO depths : 1=0.1%, 2=16.2%, 4=55.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.998 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.998 issued rwts: total=10987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.998 filename0: (groupid=0, jobs=1): err= 0: pid=82621: Wed Jul 24 21:41:21 2024 00:19:36.998 read: IOPS=1990, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5001msec) 00:19:36.998 slat (nsec): min=7777, max=53224, avg=14131.05, stdev=2795.92 00:19:36.998 clat (usec): min=1281, max=7000, avg=3963.83, stdev=309.09 00:19:36.998 lat (usec): min=1295, max=7018, avg=3977.96, stdev=309.21 00:19:36.998 clat percentiles (usec): 00:19:36.998 | 1.00th=[ 2606], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3851], 00:19:36.998 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:19:36.998 | 70.00th=[ 3949], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4490], 00:19:36.998 | 99.00th=[ 4948], 99.50th=[ 5276], 99.90th=[ 5407], 99.95th=[ 5473], 00:19:36.998 | 99.99th=[ 6980] 00:19:36.998 bw ( KiB/s): min=14848, max=17072, per=24.33%, avg=15893.33, stdev=663.04, samples=9 00:19:36.998 iops : min= 1856, max= 2134, avg=1986.67, stdev=82.88, samples=9 00:19:36.998 lat (msec) : 2=0.09%, 4=73.21%, 10=26.70% 00:19:36.998 cpu : usr=91.58%, sys=7.62%, ctx=5, majf=0, minf=9 00:19:36.998 IO depths : 1=0.1%, 2=24.0%, 4=50.8%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.998 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.998 issued rwts: total=9956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.998 filename1: (groupid=0, jobs=1): err= 0: pid=82622: Wed Jul 24 21:41:21 2024 00:19:36.998 read: IOPS=2004, BW=15.7MiB/s (16.4MB/s)(78.4MiB/5002msec) 00:19:36.998 slat (nsec): min=3818, max=50605, avg=14305.50, stdev=2961.87 00:19:36.998 clat (usec): min=1294, max=7048, avg=3933.63, stdev=332.39 00:19:36.998 lat (usec): min=1307, max=7062, avg=3947.94, stdev=332.51 00:19:36.998 clat percentiles (usec): 00:19:36.998 | 1.00th=[ 2507], 5.00th=[ 3785], 10.00th=[ 3818], 
20.00th=[ 3851], 00:19:36.998 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:19:36.998 | 70.00th=[ 3949], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4293], 00:19:36.998 | 99.00th=[ 4752], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5407], 00:19:36.998 | 99.99th=[ 6980] 00:19:36.998 bw ( KiB/s): min=14976, max=17264, per=24.53%, avg=16026.67, stdev=664.63, samples=9 00:19:36.998 iops : min= 1872, max= 2158, avg=2003.33, stdev=83.08, samples=9 00:19:36.998 lat (msec) : 2=0.30%, 4=74.03%, 10=25.68% 00:19:36.998 cpu : usr=92.00%, sys=7.10%, ctx=103, majf=0, minf=0 00:19:36.998 IO depths : 1=0.1%, 2=23.5%, 4=51.2%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.998 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.998 issued rwts: total=10029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.998 filename1: (groupid=0, jobs=1): err= 0: pid=82623: Wed Jul 24 21:41:21 2024 00:19:36.998 read: IOPS=1976, BW=15.4MiB/s (16.2MB/s)(77.2MiB/5001msec) 00:19:36.998 slat (usec): min=4, max=184, avg=13.05, stdev= 4.16 00:19:36.998 clat (usec): min=952, max=6327, avg=3995.76, stdev=409.82 00:19:36.998 lat (usec): min=961, max=6340, avg=4008.81, stdev=409.59 00:19:36.998 clat percentiles (usec): 00:19:36.998 | 1.00th=[ 2573], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3851], 00:19:36.998 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:19:36.998 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 4228], 95.00th=[ 4490], 00:19:36.998 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6259], 99.95th=[ 6259], 00:19:36.998 | 99.99th=[ 6325] 00:19:36.998 bw ( KiB/s): min=14592, max=16384, per=24.13%, avg=15765.22, stdev=636.61, samples=9 00:19:36.998 iops : min= 1824, max= 2048, avg=1970.56, stdev=79.60, samples=9 00:19:36.998 lat (usec) : 1000=0.08% 00:19:36.999 lat (msec) : 2=0.52%, 4=71.82%, 10=27.59% 00:19:36.999 cpu : usr=91.54%, sys=7.32%, ctx=110, majf=0, minf=10 00:19:36.999 IO depths : 1=0.1%, 2=24.4%, 4=50.4%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.999 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.999 issued rwts: total=9882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:36.999 00:19:36.999 Run status group 0 (all jobs): 00:19:36.999 READ: bw=63.8MiB/s (66.9MB/s), 15.4MiB/s-17.2MiB/s (16.2MB/s-18.0MB/s), io=319MiB (335MB), run=5001-5003msec 00:19:37.257 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:37.257 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:37.257 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:37.257 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:37.257 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 00:19:37.258 real 0m23.458s 00:19:37.258 user 2m3.126s 00:19:37.258 sys 0m9.020s 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.258 ************************************ 00:19:37.258 END TEST fio_dif_rand_params 00:19:37.258 ************************************ 00:19:37.258 21:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:37.258 21:41:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:37.258 21:41:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 ************************************ 00:19:37.258 START TEST fio_dif_digest 00:19:37.258 ************************************ 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 bdev_null0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:37.258 [2024-07-24 21:41:22.228906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:37.258 { 00:19:37.258 "params": { 00:19:37.258 "name": "Nvme$subsystem", 00:19:37.258 "trtype": "$TEST_TRANSPORT", 00:19:37.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.258 "adrfam": "ipv4", 00:19:37.258 "trsvcid": "$NVMF_PORT", 00:19:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.258 "hdgst": ${hdgst:-false}, 00:19:37.258 "ddgst": ${ddgst:-false} 00:19:37.258 }, 00:19:37.258 "method": "bdev_nvme_attach_controller" 00:19:37.258 } 00:19:37.258 EOF 00:19:37.258 )") 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
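For the fio_dif_digest pass the same JSON template is rendered with digests enabled: dif.sh set hdgst=true and ddgst=true a little earlier (target/dif.sh@128 above), and the ${hdgst:-false} / ${ddgst:-false} expansions in the heredoc only fall back to false when those variables are unset, which is why the config printed just below carries "hdgst": true and "ddgst": true while the earlier fio_dif_rand_params config had both false. A quick illustration of that default expansion (plain bash, not part of the test scripts):

unset hdgst ddgst
echo "hdgst=${hdgst:-false} ddgst=${ddgst:-false}"   # prints: hdgst=false ddgst=false
hdgst=true; ddgst=true
echo "hdgst=${hdgst:-false} ddgst=${ddgst:-false}"   # prints: hdgst=true ddgst=true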
00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:19:37.258 21:41:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:37.258 "params": { 00:19:37.258 "name": "Nvme0", 00:19:37.258 "trtype": "tcp", 00:19:37.258 "traddr": "10.0.0.2", 00:19:37.258 "adrfam": "ipv4", 00:19:37.258 "trsvcid": "4420", 00:19:37.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:37.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:37.259 "hdgst": true, 00:19:37.259 "ddgst": true 00:19:37.259 }, 00:19:37.259 "method": "bdev_nvme_attach_controller" 00:19:37.259 }' 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:37.518 21:41:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:37.518 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:37.518 ... 
00:19:37.518 fio-3.35 00:19:37.518 Starting 3 threads 00:19:49.744 00:19:49.744 filename0: (groupid=0, jobs=1): err= 0: pid=82729: Wed Jul 24 21:41:32 2024 00:19:49.744 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10004msec) 00:19:49.744 slat (nsec): min=7483, max=51168, avg=14829.47, stdev=4630.24 00:19:49.744 clat (usec): min=12456, max=15932, avg=13285.43, stdev=240.67 00:19:49.744 lat (usec): min=12464, max=15953, avg=13300.26, stdev=240.90 00:19:49.744 clat percentiles (usec): 00:19:49.744 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:19:49.744 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:19:49.744 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:19:49.744 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15926], 99.95th=[15926], 00:19:49.744 | 99.99th=[15926] 00:19:49.744 bw ( KiB/s): min=28416, max=29242, per=33.32%, avg=28811.25, stdev=389.40, samples=20 00:19:49.744 iops : min= 222, max= 228, avg=224.95, stdev= 3.03, samples=20 00:19:49.744 lat (msec) : 20=100.00% 00:19:49.744 cpu : usr=91.49%, sys=7.88%, ctx=7, majf=0, minf=0 00:19:49.744 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:49.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.744 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.744 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:49.744 filename0: (groupid=0, jobs=1): err= 0: pid=82730: Wed Jul 24 21:41:32 2024 00:19:49.744 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(282MiB/10006msec) 00:19:49.744 slat (nsec): min=7416, max=49467, avg=13350.54, stdev=3693.64 00:19:49.744 clat (usec): min=12461, max=15904, avg=13293.60, stdev=245.68 00:19:49.744 lat (usec): min=12469, max=15926, avg=13306.95, stdev=245.76 00:19:49.744 clat percentiles (usec): 00:19:49.744 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:19:49.744 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:19:49.744 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:19:49.744 | 99.00th=[13829], 99.50th=[14615], 99.90th=[15926], 99.95th=[15926], 00:19:49.744 | 99.99th=[15926] 00:19:49.744 bw ( KiB/s): min=28416, max=29242, per=33.32%, avg=28808.45, stdev=392.16, samples=20 00:19:49.744 iops : min= 222, max= 228, avg=224.95, stdev= 3.03, samples=20 00:19:49.744 lat (msec) : 20=100.00% 00:19:49.744 cpu : usr=90.99%, sys=8.51%, ctx=5, majf=0, minf=0 00:19:49.744 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:49.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.744 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.744 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:49.744 filename0: (groupid=0, jobs=1): err= 0: pid=82731: Wed Jul 24 21:41:32 2024 00:19:49.744 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10004msec) 00:19:49.744 slat (nsec): min=7299, max=48583, avg=10238.90, stdev=3464.89 00:19:49.744 clat (usec): min=9450, max=16271, avg=13294.87, stdev=296.20 00:19:49.744 lat (usec): min=9459, max=16297, avg=13305.11, stdev=296.36 00:19:49.744 clat percentiles (usec): 00:19:49.744 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:19:49.744 | 30.00th=[13173], 40.00th=[13173], 
50.00th=[13173], 60.00th=[13304], 00:19:49.744 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:19:49.744 | 99.00th=[13960], 99.50th=[14484], 99.90th=[16188], 99.95th=[16319], 00:19:49.744 | 99.99th=[16319] 00:19:49.744 bw ( KiB/s): min=28302, max=29242, per=33.33%, avg=28814.26, stdev=407.79, samples=19 00:19:49.744 iops : min= 221, max= 228, avg=225.05, stdev= 3.21, samples=19 00:19:49.744 lat (msec) : 10=0.13%, 20=99.87% 00:19:49.744 cpu : usr=91.53%, sys=7.89%, ctx=215, majf=0, minf=9 00:19:49.744 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:49.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.745 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.745 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:49.745 00:19:49.745 Run status group 0 (all jobs): 00:19:49.745 READ: bw=84.4MiB/s (88.5MB/s), 28.1MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=845MiB (886MB), run=10004-10006msec 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:49.745 ************************************ 00:19:49.745 END TEST fio_dif_digest 00:19:49.745 ************************************ 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.745 00:19:49.745 real 0m10.991s 00:19:49.745 user 0m28.045s 00:19:49.745 sys 0m2.688s 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.745 21:41:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:49.745 21:41:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:49.745 21:41:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:49.745 rmmod nvme_tcp 00:19:49.745 rmmod nvme_fabrics 00:19:49.745 rmmod nvme_keyring 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.745 21:41:33 
nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 81980 ']' 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 81980 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 81980 ']' 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 81980 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81980 00:19:49.745 killing process with pid 81980 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81980' 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@969 -- # kill 81980 00:19:49.745 21:41:33 nvmf_dif -- common/autotest_common.sh@974 -- # wait 81980 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:19:49.745 21:41:33 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:49.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.745 Waiting for block devices as requested 00:19:49.745 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.745 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.745 21:41:34 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:49.745 21:41:34 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:49.745 21:41:34 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.745 21:41:34 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:49.745 21:41:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.745 21:41:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:49.745 21:41:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.745 21:41:34 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:49.745 ************************************ 00:19:49.745 END TEST nvmf_dif 00:19:49.745 ************************************ 00:19:49.745 00:19:49.745 real 0m59.689s 00:19:49.745 user 3m46.981s 00:19:49.745 sys 0m20.388s 00:19:49.745 21:41:34 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.745 21:41:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:49.745 21:41:34 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:49.745 21:41:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:49.745 21:41:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.745 21:41:34 -- common/autotest_common.sh@10 -- # set +x 00:19:49.745 ************************************ 00:19:49.745 START TEST nvmf_abort_qd_sizes 00:19:49.745 ************************************ 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:49.745 * Looking for test storage... 
00:19:49.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.745 21:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:49.746 21:41:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:49.746 Cannot find device "nvmf_tgt_br" 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.746 Cannot find device "nvmf_tgt_br2" 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:49.746 Cannot find device "nvmf_tgt_br" 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:49.746 Cannot find device "nvmf_tgt_br2" 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:49.746 21:41:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:49.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:19:49.746 00:19:49.746 --- 10.0.0.2 ping statistics --- 00:19:49.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.746 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:49.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:49.746 00:19:49.746 --- 10.0.0.3 ping statistics --- 00:19:49.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.746 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:49.746 00:19:49.746 --- 10.0.0.1 ping statistics --- 00:19:49.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.746 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:49.746 21:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:50.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:50.681 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:50.681 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:50.681 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.681 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:50.681 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:50.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83332 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83332 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 83332 ']' 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.682 21:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:50.941 [2024-07-24 21:41:35.713270] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
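Condensed, the virtual topology that nvmf_veth_init builds for NET_TYPE=virt, and that the ping checks above validate, is a target network namespace joined to the initiator through a bridge; a sketch of the commands from the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the individual 'ip link set ... up' steps follow the same pattern and are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator half of the veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target half, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # forward across the bridge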
00:19:50.941 [2024-07-24 21:41:35.713710] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.941 [2024-07-24 21:41:35.862564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.199 [2024-07-24 21:41:36.001912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.199 [2024-07-24 21:41:36.002266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.199 [2024-07-24 21:41:36.002546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.199 [2024-07-24 21:41:36.002569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.199 [2024-07-24 21:41:36.002580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.199 [2024-07-24 21:41:36.002766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.199 [2024-07-24 21:41:36.002868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.199 [2024-07-24 21:41:36.003030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.199 [2024-07-24 21:41:36.003042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.199 [2024-07-24 21:41:36.065567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:19:51.766 21:41:36 
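nvmfappstart runs the target inside the namespace built above because NVMF_APP was prefixed with the netns exec command (nvmf/common.sh@209 in the trace); stripped of the framework wrappers, the launch and readiness wait amount to roughly:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
waitforlisten "$nvmfpid"    # polls until the app answers on the default /var/tmp/spdk.sock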
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
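The nvme_in_userspace enumeration above boils down to filtering lspci by PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), and keeping only BDFs that pci_can_use does not exclude; the core pipeline from the trace is:

# print the BDF of every NVMe controller (class/subclass/prog-if 01/08/02)
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

In this run that yields 0000:00:10.0 and 0000:00:11.0, and spdk_target_abort binds to the first of the two.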
00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:51.766 21:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:51.766 ************************************ 00:19:51.766 START TEST spdk_target_abort 00:19:51.766 ************************************ 00:19:51.766 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:19:51.766 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:51.767 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:51.767 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.767 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 spdk_targetn1 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 [2024-07-24 21:41:36.825119] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 [2024-07-24 21:41:36.853285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.025 21:41:36 
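The spdk_target setup traced above is five RPCs: attach the PCIe controller as a bdev, create the TCP transport, create a subsystem, add the bdev as its namespace, and expose a listener; a manual equivalent, assuming the default RPC socket of the target started earlier, would be:

scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The rabort helper then points the abort example at that listener for queue depths 4, 24 and 64, issuing 4 KiB mixed read/write I/O, e.g.:

./build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'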
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:52.025 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:52.026 21:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:55.312 Initializing NVMe Controllers 00:19:55.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:55.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:55.312 Initialization complete. Launching workers. 
00:19:55.312 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10601, failed: 0 00:19:55.312 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1030, failed to submit 9571 00:19:55.312 success 786, unsuccess 244, failed 0 00:19:55.312 21:41:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:55.312 21:41:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:58.595 Initializing NVMe Controllers 00:19:58.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:58.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:58.595 Initialization complete. Launching workers. 00:19:58.595 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9020, failed: 0 00:19:58.595 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1163, failed to submit 7857 00:19:58.595 success 396, unsuccess 767, failed 0 00:19:58.596 21:41:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:58.596 21:41:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:01.880 Initializing NVMe Controllers 00:20:01.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:01.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:01.880 Initialization complete. Launching workers. 
00:20:01.880 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30496, failed: 0 00:20:01.880 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2285, failed to submit 28211 00:20:01.880 success 413, unsuccess 1872, failed 0 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.880 21:41:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83332 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 83332 ']' 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 83332 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83332 00:20:02.447 killing process with pid 83332 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83332' 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 83332 00:20:02.447 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 83332 00:20:02.706 ************************************ 00:20:02.706 END TEST spdk_target_abort 00:20:02.706 ************************************ 00:20:02.706 00:20:02.706 real 0m10.744s 00:20:02.706 user 0m42.934s 00:20:02.706 sys 0m2.351s 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:02.706 21:41:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:02.706 21:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:02.706 21:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.706 21:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:02.706 ************************************ 00:20:02.706 START TEST kernel_target_abort 00:20:02.706 
************************************ 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.706 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:02.707 21:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:02.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:02.965 Waiting for block devices as requested 00:20:03.224 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:03.224 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:03.224 No valid GPT data, bailing 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:03.224 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:03.225 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:03.225 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:03.225 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:03.225 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:03.225 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:03.488 No valid GPT data, bailing 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:03.488 No valid GPT data, bailing 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:03.488 No valid GPT data, bailing 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:03.488 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 --hostid=987211d5-ddc7-4d0a-8ba2-cf43288d1158 -a 10.0.0.1 -t tcp -s 4420 00:20:03.752 00:20:03.752 Discovery Log Number of Records 2, Generation counter 2 00:20:03.752 =====Discovery Log Entry 0====== 00:20:03.752 trtype: tcp 00:20:03.752 adrfam: ipv4 00:20:03.752 subtype: current discovery subsystem 00:20:03.752 treq: not specified, sq flow control disable supported 00:20:03.752 portid: 1 00:20:03.752 trsvcid: 4420 00:20:03.752 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:03.752 traddr: 10.0.0.1 00:20:03.752 eflags: none 00:20:03.752 sectype: none 00:20:03.752 =====Discovery Log Entry 1====== 00:20:03.752 trtype: tcp 00:20:03.752 adrfam: ipv4 00:20:03.752 subtype: nvme subsystem 00:20:03.752 treq: not specified, sq flow control disable supported 00:20:03.752 portid: 1 00:20:03.752 trsvcid: 4420 00:20:03.752 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:03.752 traddr: 10.0.0.1 00:20:03.752 eflags: none 00:20:03.752 sectype: none 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:03.752 21:41:48 
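configure_kernel_target drives the in-kernel nvmet target purely through configfs once the block-device scan above has settled on /dev/nvme1n1. The trace only shows the values being echoed, so the attribute file names below are the standard nvmet configfs ones, filled in here as an illustration rather than copied from the log:

modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The nvme discover call that follows, and the two discovery log entries printed above, confirm that the port is serving both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.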
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:03.752 21:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:07.038 Initializing NVMe Controllers 00:20:07.038 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:07.038 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:07.038 Initialization complete. Launching workers. 00:20:07.038 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31240, failed: 0 00:20:07.038 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31240, failed to submit 0 00:20:07.038 success 0, unsuccess 31240, failed 0 00:20:07.038 21:41:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:07.038 21:41:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:10.407 Initializing NVMe Controllers 00:20:10.407 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:10.407 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:10.407 Initialization complete. Launching workers. 
00:20:10.407 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67125, failed: 0 00:20:10.407 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28784, failed to submit 38341 00:20:10.407 success 0, unsuccess 28784, failed 0 00:20:10.407 21:41:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:10.407 21:41:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:13.695 Initializing NVMe Controllers 00:20:13.695 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:13.695 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:13.695 Initialization complete. Launching workers. 00:20:13.695 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82101, failed: 0 00:20:13.695 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20520, failed to submit 61581 00:20:13.695 success 0, unsuccess 20520, failed 0 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:13.695 21:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:13.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.854 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.854 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.112 00:20:16.112 real 0m13.334s 00:20:16.112 user 0m6.087s 00:20:16.112 sys 0m4.600s 00:20:16.112 21:42:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.112 21:42:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:16.112 ************************************ 00:20:16.112 END TEST kernel_target_abort 00:20:16.112 ************************************ 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:16.112 
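clean_kernel_target, traced just above, unwinds that configfs state in reverse. The redirect target of the 'echo 0' is not visible in the log; it is presumably the namespace enable switch, and the rest of the teardown follows the traced commands:

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet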
21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.112 rmmod nvme_tcp 00:20:16.112 rmmod nvme_fabrics 00:20:16.112 rmmod nvme_keyring 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83332 ']' 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83332 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 83332 ']' 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 83332 00:20:16.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83332) - No such process 00:20:16.112 Process with pid 83332 is not found 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 83332 is not found' 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:16.112 21:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:16.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.370 Waiting for block devices as requested 00:20:16.628 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:16.628 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:16.628 ************************************ 00:20:16.628 END TEST nvmf_abort_qd_sizes 00:20:16.628 ************************************ 00:20:16.628 00:20:16.628 real 0m27.346s 00:20:16.628 user 0m50.186s 00:20:16.628 sys 0m8.350s 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.628 21:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:16.885 21:42:01 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:16.885 21:42:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:16.885 21:42:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.885 21:42:01 -- common/autotest_common.sh@10 -- # set +x 
00:20:16.885 ************************************ 00:20:16.885 START TEST keyring_file 00:20:16.885 ************************************ 00:20:16.885 21:42:01 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:16.885 * Looking for test storage... 00:20:16.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:16.885 21:42:01 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:16.885 21:42:01 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.885 21:42:01 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.885 21:42:01 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.885 21:42:01 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.885 21:42:01 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.885 21:42:01 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.885 21:42:01 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.885 21:42:01 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.885 21:42:01 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:16.886 21:42:01 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@47 -- # : 0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nAeYaYXHcG 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:16.886 21:42:01 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nAeYaYXHcG 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nAeYaYXHcG 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nAeYaYXHcG 00:20:16.886 21:42:01 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uBtikb9rbF 00:20:16.886 21:42:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:16.886 21:42:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:17.144 21:42:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uBtikb9rbF 00:20:17.144 21:42:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uBtikb9rbF 00:20:17.144 21:42:01 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uBtikb9rbF 00:20:17.144 21:42:01 keyring_file -- keyring/file.sh@30 -- # tgtpid=84204 00:20:17.144 21:42:01 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:17.144 21:42:01 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84204 00:20:17.144 21:42:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84204 ']' 00:20:17.144 21:42:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.144 21:42:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.144 21:42:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.144 21:42:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.144 21:42:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:17.144 [2024-07-24 21:42:01.968574] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
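The prep_key calls traced above (keyring/common.sh@15-23) build each test key file roughly as follows. This is a sketch, not the helper itself: the NVMeTLSkey-1 encoding is produced by an inline python snippet in nvmf/common.sh whose body does not appear in the trace, and the redirection of its output into the temp file is assumed.

    # Sketch of: prep_key key0 00112233445566778899aabbccddeeff 0
    # (assumes test/nvmf/common.sh has been sourced, as keyring/common.sh does above)
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                             # /tmp/tmp.nAeYaYXHcG in this run
    # format_interchange_psk wraps the raw hex key as an NVMeTLSkey-1 interchange PSK
    # (inline python in nvmf/common.sh; writing its output into $path is assumed here).
    format_interchange_psk "$key" 0 > "$path"
    chmod 0600 "$path"                         # looser modes are rejected later (see the 0660 negative test)
    echo "$path"                               # prep_key hands the path back; file.sh keeps it as key0path

key1 (112233445566778899aabbccddeeff00, landing in /tmp/tmp.uBtikb9rbF) is produced the same way before spdk_tgt is started.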
00:20:17.144 [2024-07-24 21:42:01.969343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84204 ] 00:20:17.144 [2024-07-24 21:42:02.107386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.402 [2024-07-24 21:42:02.237501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.402 [2024-07-24 21:42:02.299381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:18.336 21:42:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.336 21:42:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:18.336 21:42:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:18.336 21:42:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.336 21:42:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:18.336 [2024-07-24 21:42:02.993602] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.336 null0 00:20:18.336 [2024-07-24 21:42:03.025594] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.336 [2024-07-24 21:42:03.025914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:18.336 [2024-07-24 21:42:03.033555] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.336 21:42:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:18.336 [2024-07-24 21:42:03.045544] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:18.336 request: 00:20:18.336 { 00:20:18.336 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.336 "secure_channel": false, 00:20:18.336 "listen_address": { 00:20:18.336 "trtype": "tcp", 00:20:18.336 "traddr": "127.0.0.1", 00:20:18.336 "trsvcid": "4420" 00:20:18.336 }, 00:20:18.336 "method": "nvmf_subsystem_add_listener", 00:20:18.336 "req_id": 1 00:20:18.336 } 00:20:18.336 Got JSON-RPC error response 00:20:18.336 response: 00:20:18.336 { 00:20:18.336 "code": -32602, 00:20:18.336 "message": "Invalid parameters" 00:20:18.336 } 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
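The "Listener already exists" error above is the point of the check at keyring/file.sh@43: the listener on 127.0.0.1:4420 was created by the rpc_cmd block just before, so re-adding it must fail, and the NOT helper from autotest_common.sh inverts the exit status so the expected failure counts as a pass. A minimal sketch of the pattern, with only the command string taken from the trace and NOT's internals elided:

    # Expected-failure check (keyring/file.sh@43); assumes autotest_common.sh is sourced for NOT/rpc_cmd.
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
    # rpc.py surfaces the target's "Listener already exists" as JSON-RPC code -32602
    # (Invalid parameters); NOT exits 0 because the wrapped command exited non-zero.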
00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.336 21:42:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=84221 00:20:18.336 21:42:03 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:18.336 21:42:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84221 /var/tmp/bperf.sock 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84221 ']' 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.336 21:42:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:18.336 [2024-07-24 21:42:03.108849] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:20:18.336 [2024-07-24 21:42:03.108941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84221 ] 00:20:18.336 [2024-07-24 21:42:03.253925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.594 [2024-07-24 21:42:03.386162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.594 [2024-07-24 21:42:03.442694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:19.159 21:42:04 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.159 21:42:04 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:19.159 21:42:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:19.159 21:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:19.417 21:42:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uBtikb9rbF 00:20:19.417 21:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uBtikb9rbF 00:20:19.675 21:42:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:20:19.675 21:42:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:20:19.675 21:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:19.675 21:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:19.675 21:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:19.934 21:42:04 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nAeYaYXHcG == 
\/\t\m\p\/\t\m\p\.\n\A\e\Y\a\Y\X\H\c\G ]] 00:20:19.934 21:42:04 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:20:19.934 21:42:04 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:19.934 21:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:19.934 21:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:19.934 21:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:20.191 21:42:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uBtikb9rbF == \/\t\m\p\/\t\m\p\.\u\B\t\i\k\b\9\r\b\F ]] 00:20:20.191 21:42:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:20:20.191 21:42:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:20.191 21:42:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:20.192 21:42:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:20.192 21:42:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.192 21:42:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:20.450 21:42:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:20:20.450 21:42:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:20:20.450 21:42:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:20.450 21:42:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:20.450 21:42:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:20.450 21:42:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.450 21:42:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:21.016 21:42:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:21.016 21:42:05 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:21.016 21:42:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:21.016 [2024-07-24 21:42:06.009115] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.274 nvme0n1 00:20:21.274 21:42:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:20:21.274 21:42:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:21.274 21:42:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:21.274 21:42:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:21.274 21:42:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:21.274 21:42:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:21.533 21:42:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:20:21.533 21:42:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:20:21.533 21:42:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:21.533 21:42:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:21.533 21:42:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:20:21.533 21:42:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:21.533 21:42:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:21.792 21:42:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:20:21.792 21:42:06 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:21.792 Running I/O for 1 seconds... 00:20:23.181 00:20:23.181 Latency(us) 00:20:23.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.181 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:23.181 nvme0n1 : 1.01 11325.47 44.24 0.00 0.00 11265.58 5540.77 24427.05 00:20:23.181 =================================================================================================================== 00:20:23.181 Total : 11325.47 44.24 0.00 0.00 11265.58 5540.77 24427.05 00:20:23.181 0 00:20:23.181 21:42:07 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:23.181 21:42:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:23.181 21:42:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:20:23.181 21:42:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:23.181 21:42:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:23.181 21:42:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.181 21:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.181 21:42:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:23.439 21:42:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:20:23.439 21:42:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:20:23.439 21:42:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:23.439 21:42:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:23.439 21:42:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.439 21:42:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:23.439 21:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.698 21:42:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:23.698 21:42:08 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:20:23.698 21:42:08 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:23.698 21:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:23.956 [2024-07-24 21:42:08.821276] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:23.956 [2024-07-24 21:42:08.821882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f744f0 (107): Transport endpoint is not connected 00:20:23.956 [2024-07-24 21:42:08.822868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f744f0 (9): Bad file descriptor 00:20:23.956 [2024-07-24 21:42:08.823864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:23.956 [2024-07-24 21:42:08.823889] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:23.956 [2024-07-24 21:42:08.823900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:23.956 request: 00:20:23.956 { 00:20:23.956 "name": "nvme0", 00:20:23.956 "trtype": "tcp", 00:20:23.956 "traddr": "127.0.0.1", 00:20:23.956 "adrfam": "ipv4", 00:20:23.956 "trsvcid": "4420", 00:20:23.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:23.956 "prchk_reftag": false, 00:20:23.956 "prchk_guard": false, 00:20:23.956 "hdgst": false, 00:20:23.956 "ddgst": false, 00:20:23.956 "psk": "key1", 00:20:23.956 "method": "bdev_nvme_attach_controller", 00:20:23.956 "req_id": 1 00:20:23.956 } 00:20:23.956 Got JSON-RPC error response 00:20:23.956 response: 00:20:23.956 { 00:20:23.956 "code": -5, 00:20:23.956 "message": "Input/output error" 00:20:23.956 } 00:20:23.956 21:42:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:23.956 21:42:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.956 21:42:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.956 21:42:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.956 21:42:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:20:23.956 21:42:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:23.956 21:42:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:23.956 21:42:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.956 21:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.956 21:42:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:24.214 21:42:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:20:24.215 21:42:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:20:24.215 21:42:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:24.215 21:42:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:24.215 21:42:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:24.215 21:42:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:20:24.215 21:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:24.474 21:42:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:24.474 21:42:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:20:24.474 21:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:24.732 21:42:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:20:24.733 21:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:24.991 21:42:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:20:24.991 21:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:24.991 21:42:09 keyring_file -- keyring/file.sh@77 -- # jq length 00:20:25.250 21:42:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:20:25.250 21:42:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nAeYaYXHcG 00:20:25.250 21:42:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.250 21:42:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:25.250 21:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:25.509 [2024-07-24 21:42:10.382264] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nAeYaYXHcG': 0100660 00:20:25.509 [2024-07-24 21:42:10.382321] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:25.509 request: 00:20:25.509 { 00:20:25.509 "name": "key0", 00:20:25.509 "path": "/tmp/tmp.nAeYaYXHcG", 00:20:25.509 "method": "keyring_file_add_key", 00:20:25.509 "req_id": 1 00:20:25.509 } 00:20:25.509 Got JSON-RPC error response 00:20:25.509 response: 00:20:25.509 { 00:20:25.509 "code": -1, 00:20:25.509 "message": "Operation not permitted" 00:20:25.509 } 00:20:25.509 21:42:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:25.509 21:42:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.509 21:42:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.509 21:42:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.509 21:42:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nAeYaYXHcG 00:20:25.509 21:42:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:25.509 21:42:10 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nAeYaYXHcG 00:20:25.768 21:42:10 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nAeYaYXHcG 00:20:25.768 21:42:10 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:20:25.768 21:42:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:25.768 21:42:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:25.768 21:42:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:25.768 21:42:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:25.768 21:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:26.028 21:42:10 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:20:26.028 21:42:10 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.028 21:42:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:26.028 21:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:26.286 [2024-07-24 21:42:11.230479] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nAeYaYXHcG': No such file or directory 00:20:26.286 [2024-07-24 21:42:11.230535] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:26.286 [2024-07-24 21:42:11.230562] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:26.286 [2024-07-24 21:42:11.230571] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:26.286 [2024-07-24 21:42:11.230581] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:26.286 request: 00:20:26.286 { 00:20:26.286 "name": "nvme0", 00:20:26.286 "trtype": "tcp", 00:20:26.286 "traddr": "127.0.0.1", 00:20:26.286 "adrfam": "ipv4", 00:20:26.286 "trsvcid": "4420", 00:20:26.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:26.286 "prchk_reftag": false, 00:20:26.286 "prchk_guard": false, 00:20:26.286 "hdgst": false, 00:20:26.286 "ddgst": false, 00:20:26.286 "psk": "key0", 00:20:26.286 "method": "bdev_nvme_attach_controller", 00:20:26.286 "req_id": 1 00:20:26.286 } 00:20:26.286 
Got JSON-RPC error response 00:20:26.286 response: 00:20:26.286 { 00:20:26.286 "code": -19, 00:20:26.286 "message": "No such device" 00:20:26.286 } 00:20:26.286 21:42:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:26.286 21:42:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.286 21:42:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.286 21:42:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.286 21:42:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:20:26.286 21:42:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:26.852 21:42:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WfTzR97Mn9 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:26.852 21:42:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:26.852 21:42:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:26.852 21:42:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:26.852 21:42:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:26.852 21:42:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:26.852 21:42:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WfTzR97Mn9 00:20:26.852 21:42:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WfTzR97Mn9 00:20:26.852 21:42:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.WfTzR97Mn9 00:20:26.852 21:42:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WfTzR97Mn9 00:20:26.853 21:42:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WfTzR97Mn9 00:20:27.111 21:42:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:27.111 21:42:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:27.369 nvme0n1 00:20:27.369 21:42:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:20:27.369 21:42:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:27.369 21:42:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:27.369 21:42:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:27.369 21:42:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:27.369 21:42:12 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:27.627 21:42:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:20:27.627 21:42:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:20:27.627 21:42:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:27.886 21:42:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:20:27.886 21:42:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:20:27.886 21:42:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:27.886 21:42:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:27.886 21:42:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:28.144 21:42:13 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:20:28.144 21:42:13 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:20:28.144 21:42:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:28.144 21:42:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:28.144 21:42:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:28.144 21:42:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:28.144 21:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:28.403 21:42:13 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:20:28.403 21:42:13 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:28.403 21:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:28.970 21:42:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:20:28.970 21:42:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:20:28.970 21:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:29.230 21:42:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:20:29.230 21:42:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WfTzR97Mn9 00:20:29.230 21:42:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WfTzR97Mn9 00:20:29.489 21:42:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uBtikb9rbF 00:20:29.489 21:42:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uBtikb9rbF 00:20:29.748 21:42:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:29.748 21:42:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:30.006 nvme0n1 00:20:30.006 21:42:14 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:20:30.006 21:42:14 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:30.265 21:42:15 keyring_file -- keyring/file.sh@112 -- # config='{ 00:20:30.265 "subsystems": [ 00:20:30.265 { 00:20:30.265 "subsystem": "keyring", 00:20:30.265 "config": [ 00:20:30.265 { 00:20:30.265 "method": "keyring_file_add_key", 00:20:30.265 "params": { 00:20:30.265 "name": "key0", 00:20:30.265 "path": "/tmp/tmp.WfTzR97Mn9" 00:20:30.265 } 00:20:30.265 }, 00:20:30.265 { 00:20:30.265 "method": "keyring_file_add_key", 00:20:30.265 "params": { 00:20:30.265 "name": "key1", 00:20:30.265 "path": "/tmp/tmp.uBtikb9rbF" 00:20:30.265 } 00:20:30.265 } 00:20:30.265 ] 00:20:30.265 }, 00:20:30.265 { 00:20:30.265 "subsystem": "iobuf", 00:20:30.265 "config": [ 00:20:30.265 { 00:20:30.265 "method": "iobuf_set_options", 00:20:30.265 "params": { 00:20:30.265 "small_pool_count": 8192, 00:20:30.265 "large_pool_count": 1024, 00:20:30.265 "small_bufsize": 8192, 00:20:30.265 "large_bufsize": 135168 00:20:30.265 } 00:20:30.265 } 00:20:30.265 ] 00:20:30.265 }, 00:20:30.265 { 00:20:30.265 "subsystem": "sock", 00:20:30.265 "config": [ 00:20:30.265 { 00:20:30.265 "method": "sock_set_default_impl", 00:20:30.266 "params": { 00:20:30.266 "impl_name": "uring" 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "sock_impl_set_options", 00:20:30.266 "params": { 00:20:30.266 "impl_name": "ssl", 00:20:30.266 "recv_buf_size": 4096, 00:20:30.266 "send_buf_size": 4096, 00:20:30.266 "enable_recv_pipe": true, 00:20:30.266 "enable_quickack": false, 00:20:30.266 "enable_placement_id": 0, 00:20:30.266 "enable_zerocopy_send_server": true, 00:20:30.266 "enable_zerocopy_send_client": false, 00:20:30.266 "zerocopy_threshold": 0, 00:20:30.266 "tls_version": 0, 00:20:30.266 "enable_ktls": false 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "sock_impl_set_options", 00:20:30.266 "params": { 00:20:30.266 "impl_name": "posix", 00:20:30.266 "recv_buf_size": 2097152, 00:20:30.266 "send_buf_size": 2097152, 00:20:30.266 "enable_recv_pipe": true, 00:20:30.266 "enable_quickack": false, 00:20:30.266 "enable_placement_id": 0, 00:20:30.266 "enable_zerocopy_send_server": true, 00:20:30.266 "enable_zerocopy_send_client": false, 00:20:30.266 "zerocopy_threshold": 0, 00:20:30.266 "tls_version": 0, 00:20:30.266 "enable_ktls": false 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "sock_impl_set_options", 00:20:30.266 "params": { 00:20:30.266 "impl_name": "uring", 00:20:30.266 "recv_buf_size": 2097152, 00:20:30.266 "send_buf_size": 2097152, 00:20:30.266 "enable_recv_pipe": true, 00:20:30.266 "enable_quickack": false, 00:20:30.266 "enable_placement_id": 0, 00:20:30.266 "enable_zerocopy_send_server": false, 00:20:30.266 "enable_zerocopy_send_client": false, 00:20:30.266 "zerocopy_threshold": 0, 00:20:30.266 "tls_version": 0, 00:20:30.266 "enable_ktls": false 00:20:30.266 } 00:20:30.266 } 00:20:30.266 ] 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "subsystem": "vmd", 00:20:30.266 "config": [] 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "subsystem": "accel", 00:20:30.266 "config": [ 00:20:30.266 { 00:20:30.266 "method": "accel_set_options", 00:20:30.266 "params": { 00:20:30.266 "small_cache_size": 128, 00:20:30.266 "large_cache_size": 16, 00:20:30.266 "task_count": 2048, 00:20:30.266 "sequence_count": 2048, 00:20:30.266 "buf_count": 2048 00:20:30.266 } 00:20:30.266 } 00:20:30.266 ] 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "subsystem": "bdev", 00:20:30.266 "config": [ 00:20:30.266 { 
00:20:30.266 "method": "bdev_set_options", 00:20:30.266 "params": { 00:20:30.266 "bdev_io_pool_size": 65535, 00:20:30.266 "bdev_io_cache_size": 256, 00:20:30.266 "bdev_auto_examine": true, 00:20:30.266 "iobuf_small_cache_size": 128, 00:20:30.266 "iobuf_large_cache_size": 16 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "bdev_raid_set_options", 00:20:30.266 "params": { 00:20:30.266 "process_window_size_kb": 1024, 00:20:30.266 "process_max_bandwidth_mb_sec": 0 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "bdev_iscsi_set_options", 00:20:30.266 "params": { 00:20:30.266 "timeout_sec": 30 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "bdev_nvme_set_options", 00:20:30.266 "params": { 00:20:30.266 "action_on_timeout": "none", 00:20:30.266 "timeout_us": 0, 00:20:30.266 "timeout_admin_us": 0, 00:20:30.266 "keep_alive_timeout_ms": 10000, 00:20:30.266 "arbitration_burst": 0, 00:20:30.266 "low_priority_weight": 0, 00:20:30.266 "medium_priority_weight": 0, 00:20:30.266 "high_priority_weight": 0, 00:20:30.266 "nvme_adminq_poll_period_us": 10000, 00:20:30.266 "nvme_ioq_poll_period_us": 0, 00:20:30.266 "io_queue_requests": 512, 00:20:30.266 "delay_cmd_submit": true, 00:20:30.266 "transport_retry_count": 4, 00:20:30.266 "bdev_retry_count": 3, 00:20:30.266 "transport_ack_timeout": 0, 00:20:30.266 "ctrlr_loss_timeout_sec": 0, 00:20:30.266 "reconnect_delay_sec": 0, 00:20:30.266 "fast_io_fail_timeout_sec": 0, 00:20:30.266 "disable_auto_failback": false, 00:20:30.266 "generate_uuids": false, 00:20:30.266 "transport_tos": 0, 00:20:30.266 "nvme_error_stat": false, 00:20:30.266 "rdma_srq_size": 0, 00:20:30.266 "io_path_stat": false, 00:20:30.266 "allow_accel_sequence": false, 00:20:30.266 "rdma_max_cq_size": 0, 00:20:30.266 "rdma_cm_event_timeout_ms": 0, 00:20:30.266 "dhchap_digests": [ 00:20:30.266 "sha256", 00:20:30.266 "sha384", 00:20:30.266 "sha512" 00:20:30.266 ], 00:20:30.266 "dhchap_dhgroups": [ 00:20:30.266 "null", 00:20:30.266 "ffdhe2048", 00:20:30.266 "ffdhe3072", 00:20:30.266 "ffdhe4096", 00:20:30.266 "ffdhe6144", 00:20:30.266 "ffdhe8192" 00:20:30.266 ] 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "bdev_nvme_attach_controller", 00:20:30.266 "params": { 00:20:30.266 "name": "nvme0", 00:20:30.266 "trtype": "TCP", 00:20:30.266 "adrfam": "IPv4", 00:20:30.266 "traddr": "127.0.0.1", 00:20:30.266 "trsvcid": "4420", 00:20:30.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.266 "prchk_reftag": false, 00:20:30.266 "prchk_guard": false, 00:20:30.266 "ctrlr_loss_timeout_sec": 0, 00:20:30.266 "reconnect_delay_sec": 0, 00:20:30.266 "fast_io_fail_timeout_sec": 0, 00:20:30.266 "psk": "key0", 00:20:30.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:30.266 "hdgst": false, 00:20:30.266 "ddgst": false 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "bdev_nvme_set_hotplug", 00:20:30.266 "params": { 00:20:30.266 "period_us": 100000, 00:20:30.266 "enable": false 00:20:30.266 } 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "method": "bdev_wait_for_examine" 00:20:30.266 } 00:20:30.266 ] 00:20:30.266 }, 00:20:30.266 { 00:20:30.266 "subsystem": "nbd", 00:20:30.266 "config": [] 00:20:30.266 } 00:20:30.266 ] 00:20:30.266 }' 00:20:30.266 21:42:15 keyring_file -- keyring/file.sh@114 -- # killprocess 84221 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84221 ']' 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84221 00:20:30.266 21:42:15 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84221 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84221' 00:20:30.266 killing process with pid 84221 00:20:30.266 Received shutdown signal, test time was about 1.000000 seconds 00:20:30.266 00:20:30.266 Latency(us) 00:20:30.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.266 =================================================================================================================== 00:20:30.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@969 -- # kill 84221 00:20:30.266 21:42:15 keyring_file -- common/autotest_common.sh@974 -- # wait 84221 00:20:30.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:30.526 21:42:15 keyring_file -- keyring/file.sh@117 -- # bperfpid=84476 00:20:30.526 21:42:15 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84476 /var/tmp/bperf.sock 00:20:30.526 21:42:15 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84476 ']' 00:20:30.526 21:42:15 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:30.526 21:42:15 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.526 21:42:15 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
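The second bdevperf instance launched below does not re-issue the keyring and bdev_nvme RPCs one at a time; it replays the JSON captured by save_config at keyring/file.sh@112. A sketch of that hand-off, where the process substitution behind -c /dev/fd/63 and the backgrounding are inferred from the trace rather than shown in it (bperf_cmd and waitforlisten are the sourced test helpers seen above):

    # Sketch of keyring/file.sh@115-119: replay the saved configuration into a fresh bdevperf.
    config=$(bperf_cmd save_config)                   # captured above, while the first instance was still up
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &   # the <() substitution is what appears as -c /dev/fd/63
    bperfpid=$!                                       # 84476 in this run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock     # file.sh@119

Restoring both key files and the nvme0 controller from a single config blob is what lets the subsequent keyring_get_keys and bdev_nvme_get_controllers checks pass without any explicit re-setup.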
00:20:30.526 21:42:15 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:30.526 21:42:15 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.526 21:42:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:30.526 21:42:15 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:20:30.526 "subsystems": [ 00:20:30.526 { 00:20:30.526 "subsystem": "keyring", 00:20:30.526 "config": [ 00:20:30.526 { 00:20:30.526 "method": "keyring_file_add_key", 00:20:30.526 "params": { 00:20:30.526 "name": "key0", 00:20:30.526 "path": "/tmp/tmp.WfTzR97Mn9" 00:20:30.526 } 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "method": "keyring_file_add_key", 00:20:30.526 "params": { 00:20:30.526 "name": "key1", 00:20:30.526 "path": "/tmp/tmp.uBtikb9rbF" 00:20:30.526 } 00:20:30.526 } 00:20:30.526 ] 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "subsystem": "iobuf", 00:20:30.526 "config": [ 00:20:30.526 { 00:20:30.526 "method": "iobuf_set_options", 00:20:30.526 "params": { 00:20:30.526 "small_pool_count": 8192, 00:20:30.526 "large_pool_count": 1024, 00:20:30.526 "small_bufsize": 8192, 00:20:30.526 "large_bufsize": 135168 00:20:30.526 } 00:20:30.526 } 00:20:30.526 ] 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "subsystem": "sock", 00:20:30.526 "config": [ 00:20:30.526 { 00:20:30.526 "method": "sock_set_default_impl", 00:20:30.526 "params": { 00:20:30.526 "impl_name": "uring" 00:20:30.526 } 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "method": "sock_impl_set_options", 00:20:30.526 "params": { 00:20:30.526 "impl_name": "ssl", 00:20:30.526 "recv_buf_size": 4096, 00:20:30.526 "send_buf_size": 4096, 00:20:30.526 "enable_recv_pipe": true, 00:20:30.526 "enable_quickack": false, 00:20:30.526 "enable_placement_id": 0, 00:20:30.526 "enable_zerocopy_send_server": true, 00:20:30.526 "enable_zerocopy_send_client": false, 00:20:30.526 "zerocopy_threshold": 0, 00:20:30.526 "tls_version": 0, 00:20:30.526 "enable_ktls": false 00:20:30.526 } 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "method": "sock_impl_set_options", 00:20:30.526 "params": { 00:20:30.526 "impl_name": "posix", 00:20:30.526 "recv_buf_size": 2097152, 00:20:30.526 "send_buf_size": 2097152, 00:20:30.526 "enable_recv_pipe": true, 00:20:30.526 "enable_quickack": false, 00:20:30.526 "enable_placement_id": 0, 00:20:30.526 "enable_zerocopy_send_server": true, 00:20:30.526 "enable_zerocopy_send_client": false, 00:20:30.526 "zerocopy_threshold": 0, 00:20:30.526 "tls_version": 0, 00:20:30.526 "enable_ktls": false 00:20:30.526 } 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "method": "sock_impl_set_options", 00:20:30.526 "params": { 00:20:30.526 "impl_name": "uring", 00:20:30.526 "recv_buf_size": 2097152, 00:20:30.526 "send_buf_size": 2097152, 00:20:30.526 "enable_recv_pipe": true, 00:20:30.526 "enable_quickack": false, 00:20:30.526 "enable_placement_id": 0, 00:20:30.526 "enable_zerocopy_send_server": false, 00:20:30.526 "enable_zerocopy_send_client": false, 00:20:30.526 "zerocopy_threshold": 0, 00:20:30.526 "tls_version": 0, 00:20:30.526 "enable_ktls": false 00:20:30.526 } 00:20:30.526 } 00:20:30.526 ] 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "subsystem": "vmd", 00:20:30.526 "config": [] 00:20:30.526 }, 00:20:30.526 { 00:20:30.526 "subsystem": "accel", 00:20:30.526 "config": [ 00:20:30.526 { 00:20:30.526 "method": "accel_set_options", 00:20:30.526 "params": { 00:20:30.526 "small_cache_size": 128, 00:20:30.526 "large_cache_size": 16, 
00:20:30.526 "task_count": 2048, 00:20:30.526 "sequence_count": 2048, 00:20:30.527 "buf_count": 2048 00:20:30.527 } 00:20:30.527 } 00:20:30.527 ] 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "subsystem": "bdev", 00:20:30.527 "config": [ 00:20:30.527 { 00:20:30.527 "method": "bdev_set_options", 00:20:30.527 "params": { 00:20:30.527 "bdev_io_pool_size": 65535, 00:20:30.527 "bdev_io_cache_size": 256, 00:20:30.527 "bdev_auto_examine": true, 00:20:30.527 "iobuf_small_cache_size": 128, 00:20:30.527 "iobuf_large_cache_size": 16 00:20:30.527 } 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "method": "bdev_raid_set_options", 00:20:30.527 "params": { 00:20:30.527 "process_window_size_kb": 1024, 00:20:30.527 "process_max_bandwidth_mb_sec": 0 00:20:30.527 } 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "method": "bdev_iscsi_set_options", 00:20:30.527 "params": { 00:20:30.527 "timeout_sec": 30 00:20:30.527 } 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "method": "bdev_nvme_set_options", 00:20:30.527 "params": { 00:20:30.527 "action_on_timeout": "none", 00:20:30.527 "timeout_us": 0, 00:20:30.527 "timeout_admin_us": 0, 00:20:30.527 "keep_alive_timeout_ms": 10000, 00:20:30.527 "arbitration_burst": 0, 00:20:30.527 "low_priority_weight": 0, 00:20:30.527 "medium_priority_weight": 0, 00:20:30.527 "high_priority_weight": 0, 00:20:30.527 "nvme_adminq_poll_period_us": 10000, 00:20:30.527 "nvme_ioq_poll_period_us": 0, 00:20:30.527 "io_queue_requests": 512, 00:20:30.527 "delay_cmd_submit": true, 00:20:30.527 "transport_retry_count": 4, 00:20:30.527 "bdev_retry_count": 3, 00:20:30.527 "transport_ack_timeout": 0, 00:20:30.527 "ctrlr_loss_timeout_sec": 0, 00:20:30.527 "reconnect_delay_sec": 0, 00:20:30.527 "fast_io_fail_timeout_sec": 0, 00:20:30.527 "disable_auto_failback": false, 00:20:30.527 "generate_uuids": false, 00:20:30.527 "transport_tos": 0, 00:20:30.527 "nvme_error_stat": false, 00:20:30.527 "rdma_srq_size": 0, 00:20:30.527 "io_path_stat": false, 00:20:30.527 "allow_accel_sequence": false, 00:20:30.527 "rdma_max_cq_size": 0, 00:20:30.527 "rdma_cm_event_timeout_ms": 0, 00:20:30.527 "dhchap_digests": [ 00:20:30.527 "sha256", 00:20:30.527 "sha384", 00:20:30.527 "sha512" 00:20:30.527 ], 00:20:30.527 "dhchap_dhgroups": [ 00:20:30.527 "null", 00:20:30.527 "ffdhe2048", 00:20:30.527 "ffdhe3072", 00:20:30.527 "ffdhe4096", 00:20:30.527 "ffdhe6144", 00:20:30.527 "ffdhe8192" 00:20:30.527 ] 00:20:30.527 } 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "method": "bdev_nvme_attach_controller", 00:20:30.527 "params": { 00:20:30.527 "name": "nvme0", 00:20:30.527 "trtype": "TCP", 00:20:30.527 "adrfam": "IPv4", 00:20:30.527 "traddr": "127.0.0.1", 00:20:30.527 "trsvcid": "4420", 00:20:30.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.527 "prchk_reftag": false, 00:20:30.527 "prchk_guard": false, 00:20:30.527 "ctrlr_loss_timeout_sec": 0, 00:20:30.527 "reconnect_delay_sec": 0, 00:20:30.527 "fast_io_fail_timeout_sec": 0, 00:20:30.527 "psk": "key0", 00:20:30.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:30.527 "hdgst": false, 00:20:30.527 "ddgst": false 00:20:30.527 } 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "method": "bdev_nvme_set_hotplug", 00:20:30.527 "params": { 00:20:30.527 "period_us": 100000, 00:20:30.527 "enable": false 00:20:30.527 } 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "method": "bdev_wait_for_examine" 00:20:30.527 } 00:20:30.527 ] 00:20:30.527 }, 00:20:30.527 { 00:20:30.527 "subsystem": "nbd", 00:20:30.527 "config": [] 00:20:30.527 } 00:20:30.527 ] 00:20:30.527 }' 00:20:30.527 [2024-07-24 21:42:15.459452] 
Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 00:20:30.527 [2024-07-24 21:42:15.459804] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84476 ] 00:20:30.786 [2024-07-24 21:42:15.595896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.786 [2024-07-24 21:42:15.727042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.045 [2024-07-24 21:42:15.862338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:31.045 [2024-07-24 21:42:15.918003] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.612 21:42:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.612 21:42:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:31.612 21:42:16 keyring_file -- keyring/file.sh@120 -- # jq length 00:20:31.612 21:42:16 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:20:31.612 21:42:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.870 21:42:16 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:20:31.870 21:42:16 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:20:31.870 21:42:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:31.870 21:42:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:31.870 21:42:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.870 21:42:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.870 21:42:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:32.129 21:42:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:32.129 21:42:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:20:32.129 21:42:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:32.129 21:42:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:32.129 21:42:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:32.129 21:42:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:32.129 21:42:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:32.387 21:42:17 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:20:32.387 21:42:17 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:20:32.387 21:42:17 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:20:32.387 21:42:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:32.646 21:42:17 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:20:32.646 21:42:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:32.646 21:42:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WfTzR97Mn9 /tmp/tmp.uBtikb9rbF 00:20:32.646 21:42:17 keyring_file -- keyring/file.sh@20 -- # killprocess 84476 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84476 ']' 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@954 
-- # kill -0 84476 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@955 -- # uname 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84476 00:20:32.646 killing process with pid 84476 00:20:32.646 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.646 00:20:32.646 Latency(us) 00:20:32.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.646 =================================================================================================================== 00:20:32.646 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84476' 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@969 -- # kill 84476 00:20:32.646 21:42:17 keyring_file -- common/autotest_common.sh@974 -- # wait 84476 00:20:32.905 21:42:17 keyring_file -- keyring/file.sh@21 -- # killprocess 84204 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84204 ']' 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84204 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@955 -- # uname 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84204 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.905 killing process with pid 84204 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84204' 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@969 -- # kill 84204 00:20:32.905 [2024-07-24 21:42:17.827903] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:32.905 21:42:17 keyring_file -- common/autotest_common.sh@974 -- # wait 84204 00:20:33.472 ************************************ 00:20:33.472 END TEST keyring_file 00:20:33.472 ************************************ 00:20:33.472 00:20:33.472 real 0m16.544s 00:20:33.472 user 0m41.345s 00:20:33.472 sys 0m3.217s 00:20:33.472 21:42:18 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.472 21:42:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:33.472 21:42:18 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:20:33.472 21:42:18 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:33.472 21:42:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:33.472 21:42:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.472 21:42:18 -- common/autotest_common.sh@10 -- # set +x 00:20:33.472 ************************************ 00:20:33.472 START TEST keyring_linux 00:20:33.472 ************************************ 00:20:33.472 21:42:18 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:33.472 * Looking for test storage... 
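[editor's note] The keyring_file verification traced above (keyring/file.sh@120-123) follows one pattern: query bdevperf over its RPC socket and filter with jq. A condensed sketch of those checks, using the same rpc.py path, socket, and jq filters that appear in the trace; the $rpc/$sock shorthands stand in for the test's bperf_cmd wrapper and are not part of the original scripts.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # file.sh@120: two keys (key0, key1) are registered with the keyring subsystem
    "$rpc" -s "$sock" keyring_get_keys | jq length                                  # expected: 2

    # file.sh@121/122: per-key reference counts
    "$rpc" -s "$sock" keyring_get_keys \
        | jq '.[] | select(.name == "key0")' | jq -r .refcnt                        # expected: 2 (key0 was passed as --psk)
    "$rpc" -s "$sock" keyring_get_keys \
        | jq '.[] | select(.name == "key1")' | jq -r .refcnt                        # expected: 1 (key1 was never attached)

    # file.sh@123: the controller attached with key0 is present
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'                  # expected: nvme0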
00:20:33.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=987211d5-ddc7-4d0a-8ba2-cf43288d1158 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.472 21:42:18 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.472 21:42:18 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.472 21:42:18 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.472 21:42:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.472 21:42:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.472 21:42:18 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.472 21:42:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:33.472 21:42:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:33.472 21:42:18 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:33.472 /tmp/:spdk-test:key0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:33.472 21:42:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:33.472 21:42:18 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:33.472 21:42:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:33.730 21:42:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:33.730 /tmp/:spdk-test:key1 00:20:33.730 21:42:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84594 00:20:33.730 21:42:18 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:33.730 21:42:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84594 00:20:33.730 21:42:18 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84594 ']' 00:20:33.730 21:42:18 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.730 21:42:18 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.730 21:42:18 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.730 21:42:18 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.730 21:42:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:33.730 [2024-07-24 21:42:18.538766] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
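[editor's note] The prep_key calls traced above (keyring/linux.sh@47-48 via keyring/common.sh) turn a raw hex key into the NVMe TLS interchange format and store it in a 0600 file. A minimal sketch under the values used here; the inline python helper behind format_key is not reproduced in this log, so the literal interchange string below is simply the one the test later loads with keyctl, and the echo/redirect form of the file write is an assumption.

    # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0   (sketch)
    name=key0
    key=00112233445566778899aabbccddeeff          # raw hex key from linux.sh@13
    path=/tmp/:spdk-test:$name

    # format_interchange_psk wraps the raw key as "NVMeTLSkey-1:00:<base64 body>:"
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    echo -n "$psk" > "$path"
    chmod 0600 "$path"                            # keep the PSK file private, as common.sh@21 does
    echo "$path"                                  # prep_key hands the path back to the caller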
00:20:33.730 [2024-07-24 21:42:18.538862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84594 ] 00:20:33.730 [2024-07-24 21:42:18.679288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.989 [2024-07-24 21:42:18.807579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.989 [2024-07-24 21:42:18.863935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:34.556 21:42:19 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.556 21:42:19 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:20:34.556 21:42:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:34.556 21:42:19 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.556 21:42:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:34.556 [2024-07-24 21:42:19.538373] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.815 null0 00:20:34.815 [2024-07-24 21:42:19.570344] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.815 [2024-07-24 21:42:19.570580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.815 21:42:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:34.815 78371510 00:20:34.815 21:42:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:34.815 227113610 00:20:34.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:34.815 21:42:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84611 00:20:34.815 21:42:19 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:34.815 21:42:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84611 /var/tmp/bperf.sock 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84611 ']' 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.815 21:42:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:34.815 [2024-07-24 21:42:19.655431] Starting SPDK v24.09-pre git sha1 68f798423 / DPDK 24.03.0 initialization... 
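[editor's note] With spdk_tgt (pid 84594) listening on 127.0.0.1:4420, the same interchange-format PSKs are inserted into the kernel session keyring. A sketch of the keyctl calls traced above (linux.sh@66-67) plus the later lookup (linux.sh@16/@27); the serial numbers are whatever the kernel returns and only matter for the subsequent search/print/unlink steps.

    keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s    # -> 78371510 here
    keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s    # -> 227113610 here

    # check_keys later resolves a key back by name and compares serial and payload:
    sn=$(keyctl search @s user :spdk-test:key0)   # returns the serial, e.g. 78371510
    keyctl print "$sn"                            # prints the NVMeTLSkey-1:00:... payload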
00:20:34.815 [2024-07-24 21:42:19.655797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84611 ] 00:20:34.815 [2024-07-24 21:42:19.793811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.074 [2024-07-24 21:42:19.944280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.642 21:42:20 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.642 21:42:20 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:20:35.642 21:42:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:35.642 21:42:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:35.901 21:42:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:35.901 21:42:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:36.468 [2024-07-24 21:42:21.210228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:36.468 21:42:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:36.468 21:42:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:36.726 [2024-07-24 21:42:21.536733] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.726 nvme0n1 00:20:36.726 21:42:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:36.726 21:42:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:36.726 21:42:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:36.726 21:42:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:36.726 21:42:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:36.726 21:42:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:36.985 21:42:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:36.985 21:42:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:36.985 21:42:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:36.985 21:42:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:36.985 21:42:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:36.985 21:42:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:36.985 21:42:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:37.243 21:42:22 keyring_linux -- keyring/linux.sh@25 -- # sn=78371510 00:20:37.243 21:42:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:37.243 21:42:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:37.243 
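[editor's note] The startup sequence traced here boils down to three RPCs against the bdevperf socket: enable the Linux-keyring plugin while bdevperf is still in --wait-for-rpc, finish framework init, then attach an NVMe/TCP controller whose TLS PSK is looked up by keyring name. A condensed sketch using the exact arguments from the trace; the $rpc/$sock variables are shorthand for the bperf_cmd wrapper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$rpc" -s "$sock" keyring_linux_set_options --enable      # must run before framework init
    "$rpc" -s "$sock" framework_start_init
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0                                 # success yields bdev nvme0n1

The attach names the key as ":spdk-test:key0" rather than a file path, which is the point of the keyring_linux test: the PSK is resolved from the kernel keyring at connect time.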
21:42:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 78371510 == \7\8\3\7\1\5\1\0 ]] 00:20:37.243 21:42:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 78371510 00:20:37.243 21:42:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:37.243 21:42:22 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:37.501 Running I/O for 1 seconds... 00:20:38.436 00:20:38.436 Latency(us) 00:20:38.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.436 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:38.436 nvme0n1 : 1.01 11562.59 45.17 0.00 0.00 11000.78 6970.65 16443.58 00:20:38.436 =================================================================================================================== 00:20:38.436 Total : 11562.59 45.17 0.00 0.00 11000.78 6970.65 16443.58 00:20:38.436 0 00:20:38.436 21:42:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:38.436 21:42:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:38.693 21:42:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:38.693 21:42:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:38.693 21:42:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:38.693 21:42:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:38.693 21:42:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:38.693 21:42:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:38.951 21:42:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:38.951 21:42:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:38.951 21:42:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:38.951 21:42:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:38.951 21:42:23 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:20:38.951 21:42:23 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:38.951 21:42:23 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:20:38.951 21:42:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.951 21:42:23 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:20:38.951 21:42:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.952 21:42:23 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:38.952 21:42:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:39.211 [2024-07-24 21:42:24.085180] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:39.211 [2024-07-24 21:42:24.085370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2542460 (107): Transport endpoint is not connected 00:20:39.211 [2024-07-24 21:42:24.086361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2542460 (9): Bad file descriptor 00:20:39.211 request: 00:20:39.211 { 00:20:39.211 "name": "nvme0", 00:20:39.211 "trtype": "tcp", 00:20:39.211 "traddr": "127.0.0.1", 00:20:39.211 "adrfam": "ipv4", 00:20:39.211 "trsvcid": "4420", 00:20:39.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:39.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:39.211 "prchk_reftag": false, 00:20:39.211 "prchk_guard": false, 00:20:39.211 "hdgst": false, 00:20:39.211 "ddgst": false, 00:20:39.211 "psk": ":spdk-test:key1", 00:20:39.211 "method": "bdev_nvme_attach_controller", 00:20:39.211 "req_id": 1 00:20:39.211 } 00:20:39.211 Got JSON-RPC error response 00:20:39.211 response: 00:20:39.211 { 00:20:39.211 "code": -5, 00:20:39.211 "message": "Input/output error" 00:20:39.211 } 00:20:39.211 [2024-07-24 21:42:24.087357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:39.211 [2024-07-24 21:42:24.087381] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:39.211 [2024-07-24 21:42:24.087392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
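[editor's note] linux.sh@84 asserts the opposite outcome for key1: the key resolves in the kernel keyring, but the connection fails (presumably because the listener side was set up with key0), so the attach RPC must return an error. A sketch of that assertion with the autotest NOT helper simplified to a plain if; $rpc/$sock are again shorthand for the bperf_cmd wrapper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
          --psk :spdk-test:key1; then
        echo "attach with key1 unexpectedly succeeded" >&2
        exit 1
    fi
    # Expected JSON-RPC reply, as captured above: {"code": -5, "message": "Input/output error"}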
00:20:39.211 21:42:24 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:20:39.211 21:42:24 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:39.211 21:42:24 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:39.211 21:42:24 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:39.211 21:42:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:39.211 21:42:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:39.211 21:42:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:39.211 21:42:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:39.211 21:42:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:39.211 21:42:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@33 -- # sn=78371510 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 78371510 00:20:39.212 1 links removed 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@33 -- # sn=227113610 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 227113610 00:20:39.212 1 links removed 00:20:39.212 21:42:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84611 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84611 ']' 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84611 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84611 00:20:39.212 killing process with pid 84611 00:20:39.212 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.212 00:20:39.212 Latency(us) 00:20:39.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.212 =================================================================================================================== 00:20:39.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84611' 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 84611 00:20:39.212 21:42:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 84611 00:20:39.470 21:42:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84594 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84594 ']' 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84594 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84594 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:39.470 killing process with pid 84594 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84594' 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 84594 00:20:39.470 21:42:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 84594 00:20:40.036 00:20:40.036 real 0m6.512s 00:20:40.036 user 0m12.815s 00:20:40.036 sys 0m1.531s 00:20:40.036 21:42:24 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.036 21:42:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:40.036 ************************************ 00:20:40.036 END TEST keyring_linux 00:20:40.036 ************************************ 00:20:40.036 21:42:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:20:40.036 21:42:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:20:40.036 21:42:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:20:40.036 21:42:24 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:20:40.036 21:42:24 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:20:40.036 21:42:24 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:20:40.036 21:42:24 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:20:40.036 21:42:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.036 21:42:24 -- common/autotest_common.sh@10 -- # set +x 00:20:40.036 21:42:24 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:20:40.036 21:42:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:40.036 21:42:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:40.036 21:42:24 -- common/autotest_common.sh@10 -- # set +x 00:20:41.412 INFO: APP EXITING 00:20:41.412 INFO: killing all VMs 00:20:41.412 INFO: killing vhost app 00:20:41.412 INFO: EXIT DONE 00:20:41.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.978 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:42.236 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:42.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.804 Cleaning 00:20:42.804 Removing: /var/run/dpdk/spdk0/config 00:20:42.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:42.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:42.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:42.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:42.804 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:42.804 
Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:42.804 Removing: /var/run/dpdk/spdk1/config 00:20:42.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:42.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:42.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:42.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:42.804 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:42.804 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:42.804 Removing: /var/run/dpdk/spdk2/config 00:20:42.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:42.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:42.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:42.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:42.804 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:42.804 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:42.804 Removing: /var/run/dpdk/spdk3/config 00:20:42.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:42.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:42.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:42.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:42.804 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:42.804 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:42.804 Removing: /var/run/dpdk/spdk4/config 00:20:42.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:42.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:42.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:42.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:42.804 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:42.804 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:42.804 Removing: /dev/shm/nvmf_trace.0 00:20:42.804 Removing: /dev/shm/spdk_tgt_trace.pid58693 00:20:42.804 Removing: /var/run/dpdk/spdk0 00:20:42.804 Removing: /var/run/dpdk/spdk1 00:20:42.804 Removing: /var/run/dpdk/spdk2 00:20:42.804 Removing: /var/run/dpdk/spdk3 00:20:43.063 Removing: /var/run/dpdk/spdk4 00:20:43.063 Removing: /var/run/dpdk/spdk_pid58548 00:20:43.063 Removing: /var/run/dpdk/spdk_pid58693 00:20:43.063 Removing: /var/run/dpdk/spdk_pid58891 00:20:43.063 Removing: /var/run/dpdk/spdk_pid58983 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59010 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59120 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59138 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59256 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59457 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59598 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59669 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59745 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59836 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59907 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59946 00:20:43.063 Removing: /var/run/dpdk/spdk_pid59981 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60043 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60142 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60575 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60627 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60678 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60700 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60772 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60788 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60866 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60882 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60928 00:20:43.063 Removing: /var/run/dpdk/spdk_pid60946 00:20:43.063 
Removing: /var/run/dpdk/spdk_pid60991 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61009 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61132 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61167 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61242 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61552 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61569 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61608 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61627 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61642 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61667 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61686 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61707 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61726 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61745 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61766 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61785 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61804 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61825 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61844 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61863 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61884 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61903 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61922 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61938 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61968 00:20:43.063 Removing: /var/run/dpdk/spdk_pid61989 00:20:43.063 Removing: /var/run/dpdk/spdk_pid62024 00:20:43.063 Removing: /var/run/dpdk/spdk_pid62088 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62121 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62127 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62160 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62175 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62183 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62225 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62244 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62275 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62290 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62299 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62309 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62324 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62333 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62343 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62358 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62392 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62418 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62428 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62462 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62477 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62483 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62525 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62542 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62575 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62577 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62590 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62603 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62617 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62619 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62632 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62645 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62719 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62762 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62877 00:20:43.064 Removing: /var/run/dpdk/spdk_pid62910 00:20:43.321 Removing: /var/run/dpdk/spdk_pid62961 00:20:43.321 Removing: /var/run/dpdk/spdk_pid62976 00:20:43.321 Removing: /var/run/dpdk/spdk_pid62998 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63018 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63050 00:20:43.321 Removing: 
/var/run/dpdk/spdk_pid63071 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63141 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63162 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63212 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63291 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63361 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63391 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63483 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63531 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63563 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63787 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63885 00:20:43.321 Removing: /var/run/dpdk/spdk_pid63919 00:20:43.321 Removing: /var/run/dpdk/spdk_pid64262 00:20:43.321 Removing: /var/run/dpdk/spdk_pid64300 00:20:43.321 Removing: /var/run/dpdk/spdk_pid64596 00:20:43.321 Removing: /var/run/dpdk/spdk_pid65002 00:20:43.321 Removing: /var/run/dpdk/spdk_pid65270 00:20:43.321 Removing: /var/run/dpdk/spdk_pid66050 00:20:43.321 Removing: /var/run/dpdk/spdk_pid66874 00:20:43.321 Removing: /var/run/dpdk/spdk_pid66990 00:20:43.321 Removing: /var/run/dpdk/spdk_pid67058 00:20:43.321 Removing: /var/run/dpdk/spdk_pid68312 00:20:43.322 Removing: /var/run/dpdk/spdk_pid68570 00:20:43.322 Removing: /var/run/dpdk/spdk_pid71657 00:20:43.322 Removing: /var/run/dpdk/spdk_pid71953 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72061 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72200 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72209 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72242 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72265 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72353 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72486 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72632 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72699 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72887 00:20:43.322 Removing: /var/run/dpdk/spdk_pid72970 00:20:43.322 Removing: /var/run/dpdk/spdk_pid73050 00:20:43.322 Removing: /var/run/dpdk/spdk_pid73354 00:20:43.322 Removing: /var/run/dpdk/spdk_pid73762 00:20:43.322 Removing: /var/run/dpdk/spdk_pid73764 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74039 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74053 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74077 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74104 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74109 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74402 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74455 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74742 00:20:43.322 Removing: /var/run/dpdk/spdk_pid74945 00:20:43.322 Removing: /var/run/dpdk/spdk_pid75310 00:20:43.322 Removing: /var/run/dpdk/spdk_pid75813 00:20:43.322 Removing: /var/run/dpdk/spdk_pid76629 00:20:43.322 Removing: /var/run/dpdk/spdk_pid77218 00:20:43.322 Removing: /var/run/dpdk/spdk_pid77230 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79144 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79205 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79271 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79326 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79447 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79507 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79566 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79622 00:20:43.322 Removing: /var/run/dpdk/spdk_pid79937 00:20:43.322 Removing: /var/run/dpdk/spdk_pid81101 00:20:43.322 Removing: /var/run/dpdk/spdk_pid81245 00:20:43.322 Removing: /var/run/dpdk/spdk_pid81482 00:20:43.322 Removing: /var/run/dpdk/spdk_pid82037 00:20:43.322 Removing: /var/run/dpdk/spdk_pid82200 00:20:43.322 Removing: /var/run/dpdk/spdk_pid82358 
00:20:43.322 Removing: /var/run/dpdk/spdk_pid82455 00:20:43.322 Removing: /var/run/dpdk/spdk_pid82616 00:20:43.322 Removing: /var/run/dpdk/spdk_pid82725 00:20:43.322 Removing: /var/run/dpdk/spdk_pid83383 00:20:43.322 Removing: /var/run/dpdk/spdk_pid83414 00:20:43.322 Removing: /var/run/dpdk/spdk_pid83455 00:20:43.322 Removing: /var/run/dpdk/spdk_pid83708 00:20:43.322 Removing: /var/run/dpdk/spdk_pid83738 00:20:43.322 Removing: /var/run/dpdk/spdk_pid83773 00:20:43.322 Removing: /var/run/dpdk/spdk_pid84204 00:20:43.322 Removing: /var/run/dpdk/spdk_pid84221 00:20:43.322 Removing: /var/run/dpdk/spdk_pid84476 00:20:43.322 Removing: /var/run/dpdk/spdk_pid84594 00:20:43.322 Removing: /var/run/dpdk/spdk_pid84611 00:20:43.322 Clean 00:20:43.580 21:42:28 -- common/autotest_common.sh@1451 -- # return 0 00:20:43.580 21:42:28 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:20:43.580 21:42:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.580 21:42:28 -- common/autotest_common.sh@10 -- # set +x 00:20:43.580 21:42:28 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:20:43.580 21:42:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.580 21:42:28 -- common/autotest_common.sh@10 -- # set +x 00:20:43.580 21:42:28 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:43.580 21:42:28 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:43.580 21:42:28 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:43.580 21:42:28 -- spdk/autotest.sh@395 -- # hash lcov 00:20:43.580 21:42:28 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:43.580 21:42:28 -- spdk/autotest.sh@397 -- # hostname 00:20:43.580 21:42:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:43.837 geninfo: WARNING: invalid characters removed from testname! 
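[editor's note] The epilogue that starts here captures and trims code coverage. Stripped of the repeated --rc lcov/genhtml switches and the -t testname tag, the capture above and the merge/filter passes that follow amount to the sequence sketched below: capture test coverage, merge it with the baseline, then drop DPDK, system, and example sources from the combined report.

    out=/home/vagrant/spdk_repo/spdk/../output
    lcov -q -c -d /home/vagrant/spdk_repo/spdk --no-external -o "$out/cov_test.info"
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done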
00:21:15.959 21:42:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:15.959 21:42:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:17.330 21:43:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:19.888 21:43:04 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:23.171 21:43:07 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:25.700 21:43:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:28.232 21:43:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:28.232 21:43:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.232 21:43:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:28.232 21:43:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.232 21:43:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.232 21:43:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.232 21:43:12 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.232 21:43:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.232 21:43:12 -- paths/export.sh@5 -- $ export PATH 00:21:28.232 21:43:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.232 21:43:12 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:28.232 21:43:12 -- common/autobuild_common.sh@447 -- $ date +%s 00:21:28.232 21:43:12 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721857392.XXXXXX 00:21:28.232 21:43:12 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721857392.YyS1uz 00:21:28.232 21:43:12 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:21:28.232 21:43:12 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:21:28.232 21:43:12 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:28.232 21:43:12 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:28.232 21:43:12 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:28.232 21:43:12 -- common/autobuild_common.sh@463 -- $ get_config_params 00:21:28.232 21:43:12 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:21:28.232 21:43:12 -- common/autotest_common.sh@10 -- $ set +x 00:21:28.232 21:43:12 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:21:28.232 21:43:12 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:21:28.232 21:43:12 -- pm/common@17 -- $ local monitor 00:21:28.232 21:43:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:28.232 21:43:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:28.232 21:43:12 -- pm/common@25 -- $ sleep 1 00:21:28.232 21:43:12 -- pm/common@21 -- $ date +%s 00:21:28.232 21:43:12 -- pm/common@21 -- $ date +%s 00:21:28.232 21:43:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721857392 00:21:28.232 21:43:12 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721857392 00:21:28.232 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721857392_collect-vmstat.pm.log 00:21:28.232 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721857392_collect-cpu-load.pm.log 00:21:29.168 21:43:13 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:21:29.168 21:43:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:29.168 21:43:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:21:29.168 21:43:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:29.168 21:43:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:29.168 21:43:13 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:29.168 21:43:13 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:29.168 21:43:13 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:29.168 21:43:13 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:29.168 21:43:13 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:29.168 21:43:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:21:29.168 21:43:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:29.168 21:43:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:29.168 21:43:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:29.168 21:43:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:29.168 21:43:13 -- pm/common@44 -- $ pid=86326 00:21:29.168 21:43:13 -- pm/common@50 -- $ kill -TERM 86326 00:21:29.168 21:43:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:29.168 21:43:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:29.168 21:43:13 -- pm/common@44 -- $ pid=86328 00:21:29.168 21:43:13 -- pm/common@50 -- $ kill -TERM 86328 00:21:29.168 + [[ -n 5103 ]] 00:21:29.168 + sudo kill 5103 00:21:29.178 [Pipeline] } 00:21:29.197 [Pipeline] // timeout 00:21:29.204 [Pipeline] } 00:21:29.223 [Pipeline] // stage 00:21:29.230 [Pipeline] } 00:21:29.248 [Pipeline] // catchError 00:21:29.259 [Pipeline] stage 00:21:29.261 [Pipeline] { (Stop VM) 00:21:29.274 [Pipeline] sh 00:21:29.549 + vagrant halt 00:21:33.737 ==> default: Halting domain... 00:21:39.015 [Pipeline] sh 00:21:39.293 + vagrant destroy -f 00:21:43.476 ==> default: Removing domain... 
00:21:43.489 [Pipeline] sh 00:21:43.768 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:21:43.777 [Pipeline] } 00:21:43.795 [Pipeline] // stage 00:21:43.801 [Pipeline] } 00:21:43.820 [Pipeline] // dir 00:21:43.825 [Pipeline] } 00:21:43.843 [Pipeline] // wrap 00:21:43.850 [Pipeline] } 00:21:43.866 [Pipeline] // catchError 00:21:43.876 [Pipeline] stage 00:21:43.879 [Pipeline] { (Epilogue) 00:21:43.894 [Pipeline] sh 00:21:44.172 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:50.779 [Pipeline] catchError 00:21:50.781 [Pipeline] { 00:21:50.798 [Pipeline] sh 00:21:51.078 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:51.078 Artifacts sizes are good 00:21:51.087 [Pipeline] } 00:21:51.104 [Pipeline] // catchError 00:21:51.118 [Pipeline] archiveArtifacts 00:21:51.125 Archiving artifacts 00:21:51.257 [Pipeline] cleanWs 00:21:51.269 [WS-CLEANUP] Deleting project workspace... 00:21:51.269 [WS-CLEANUP] Deferred wipeout is used... 00:21:51.275 [WS-CLEANUP] done 00:21:51.277 [Pipeline] } 00:21:51.295 [Pipeline] // stage 00:21:51.300 [Pipeline] } 00:21:51.316 [Pipeline] // node 00:21:51.322 [Pipeline] End of Pipeline 00:21:51.360 Finished: SUCCESS